00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2385 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3646 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.114 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.115 The recommended git tool is: git 00:00:00.115 using credential 00000000-0000-0000-0000-000000000002 00:00:00.116 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.158 Fetching changes from the remote Git repository 00:00:00.159 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.194 Using shallow fetch with depth 1 00:00:00.194 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.194 > git --version # timeout=10 00:00:00.222 > git --version # 'git version 2.39.2' 00:00:00.222 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.237 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.237 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.754 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.766 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.777 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.777 > git config core.sparsecheckout # timeout=10 00:00:04.787 > git read-tree -mu HEAD # timeout=10 00:00:04.803 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.824 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.824 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.925 [Pipeline] Start of Pipeline 00:00:04.938 [Pipeline] library 00:00:04.940 Loading library shm_lib@master 00:00:04.940 Library shm_lib@master is cached. Copying from home. 00:00:04.954 [Pipeline] node 00:00:04.974 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:04.976 [Pipeline] { 00:00:04.984 [Pipeline] catchError 00:00:04.984 [Pipeline] { 00:00:04.995 [Pipeline] wrap 00:00:05.003 [Pipeline] { 00:00:05.007 [Pipeline] stage 00:00:05.008 [Pipeline] { (Prologue) 00:00:05.023 [Pipeline] echo 00:00:05.025 Node: VM-host-SM38 00:00:05.030 [Pipeline] cleanWs 00:00:05.040 [WS-CLEANUP] Deleting project workspace... 00:00:05.040 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.047 [WS-CLEANUP] done 00:00:05.249 [Pipeline] setCustomBuildProperty 00:00:05.318 [Pipeline] httpRequest 00:00:05.943 [Pipeline] echo 00:00:05.945 Sorcerer 10.211.164.20 is alive 00:00:05.954 [Pipeline] retry 00:00:05.956 [Pipeline] { 00:00:05.971 [Pipeline] httpRequest 00:00:05.975 HttpMethod: GET 00:00:05.976 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.976 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.982 Response Code: HTTP/1.1 200 OK 00:00:05.983 Success: Status code 200 is in the accepted range: 200,404 00:00:05.984 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.799 [Pipeline] } 00:00:06.815 [Pipeline] // retry 00:00:06.823 [Pipeline] sh 00:00:07.109 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.125 [Pipeline] httpRequest 00:00:07.525 [Pipeline] echo 00:00:07.527 Sorcerer 10.211.164.20 is alive 00:00:07.537 [Pipeline] retry 00:00:07.539 [Pipeline] { 00:00:07.550 [Pipeline] httpRequest 00:00:07.555 HttpMethod: GET 00:00:07.555 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:07.556 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:07.567 Response Code: HTTP/1.1 200 OK 00:00:07.568 Success: Status code 200 is in the accepted range: 200,404 00:00:07.568 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:40.852 [Pipeline] } 00:00:40.870 [Pipeline] // retry 00:00:40.878 [Pipeline] sh 00:00:41.161 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:44.471 [Pipeline] sh 00:00:44.762 + git -C spdk log --oneline -n5 00:00:44.762 c13c99a5e test: Various fixes for Fedora40 00:00:44.762 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:44.762 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:44.762 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:44.762 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:44.783 [Pipeline] writeFile 00:00:44.798 [Pipeline] sh 00:00:45.086 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:45.101 [Pipeline] sh 00:00:45.391 + cat autorun-spdk.conf 00:00:45.392 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.392 SPDK_TEST_NVME=1 00:00:45.392 SPDK_TEST_FTL=1 00:00:45.392 SPDK_TEST_ISAL=1 00:00:45.392 SPDK_RUN_ASAN=1 00:00:45.392 SPDK_RUN_UBSAN=1 00:00:45.392 SPDK_TEST_XNVME=1 00:00:45.392 SPDK_TEST_NVME_FDP=1 00:00:45.392 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:45.400 RUN_NIGHTLY=1 00:00:45.402 [Pipeline] } 00:00:45.416 [Pipeline] // stage 00:00:45.430 [Pipeline] stage 00:00:45.433 [Pipeline] { (Run VM) 00:00:45.445 [Pipeline] sh 00:00:45.731 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:45.731 + echo 'Start stage prepare_nvme.sh' 00:00:45.731 Start stage prepare_nvme.sh 00:00:45.731 + [[ -n 1 ]] 00:00:45.731 + disk_prefix=ex1 00:00:45.731 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:45.731 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:45.731 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:45.731 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.731 ++ SPDK_TEST_NVME=1 00:00:45.731 ++ SPDK_TEST_FTL=1 00:00:45.731 ++ SPDK_TEST_ISAL=1 00:00:45.731 ++ 
SPDK_RUN_ASAN=1 00:00:45.731 ++ SPDK_RUN_UBSAN=1 00:00:45.731 ++ SPDK_TEST_XNVME=1 00:00:45.731 ++ SPDK_TEST_NVME_FDP=1 00:00:45.731 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:45.731 ++ RUN_NIGHTLY=1 00:00:45.731 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:45.731 + nvme_files=() 00:00:45.731 + declare -A nvme_files 00:00:45.731 + backend_dir=/var/lib/libvirt/images/backends 00:00:45.731 + nvme_files['nvme.img']=5G 00:00:45.731 + nvme_files['nvme-cmb.img']=5G 00:00:45.731 + nvme_files['nvme-multi0.img']=4G 00:00:45.731 + nvme_files['nvme-multi1.img']=4G 00:00:45.731 + nvme_files['nvme-multi2.img']=4G 00:00:45.731 + nvme_files['nvme-openstack.img']=8G 00:00:45.731 + nvme_files['nvme-zns.img']=5G 00:00:45.731 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:45.731 + (( SPDK_TEST_FTL == 1 )) 00:00:45.731 + nvme_files["nvme-ftl.img"]=6G 00:00:45.731 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:45.731 + nvme_files["nvme-fdp.img"]=1G 00:00:45.731 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:45.731 + for nvme in "${!nvme_files[@]}" 00:00:45.731 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:45.731 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:45.731 + for nvme in "${!nvme_files[@]}" 00:00:45.731 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:00:46.707 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:46.707 + for nvme in "${!nvme_files[@]}" 00:00:46.707 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:46.707 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:46.707 + for nvme in "${!nvme_files[@]}" 00:00:46.707 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:46.707 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:46.707 + for nvme in "${!nvme_files[@]}" 00:00:46.707 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:47.280 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:47.280 + for nvme in "${!nvme_files[@]}" 00:00:47.280 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:47.280 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:47.280 + for nvme in "${!nvme_files[@]}" 00:00:47.280 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:47.280 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:47.280 + for nvme in "${!nvme_files[@]}" 00:00:47.280 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:00:47.542 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:47.542 + for nvme in "${!nvme_files[@]}" 00:00:47.542 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:48.114 Formatting 
'/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:48.114 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:48.114 + echo 'End stage prepare_nvme.sh' 00:00:48.114 End stage prepare_nvme.sh 00:00:48.127 [Pipeline] sh 00:00:48.412 + DISTRO=fedora39 00:00:48.412 + CPUS=10 00:00:48.412 + RAM=12288 00:00:48.412 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:48.412 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:00:48.412 00:00:48.412 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:48.412 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:48.412 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:48.412 HELP=0 00:00:48.412 DRY_RUN=0 00:00:48.412 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:00:48.412 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:48.412 NVME_AUTO_CREATE=0 00:00:48.412 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:00:48.412 NVME_CMB=,,,, 00:00:48.412 NVME_PMR=,,,, 00:00:48.412 NVME_ZNS=,,,, 00:00:48.412 NVME_MS=true,,,, 00:00:48.412 NVME_FDP=,,,on, 00:00:48.412 SPDK_VAGRANT_DISTRO=fedora39 00:00:48.412 SPDK_VAGRANT_VMCPU=10 00:00:48.412 SPDK_VAGRANT_VMRAM=12288 00:00:48.412 SPDK_VAGRANT_PROVIDER=libvirt 00:00:48.412 SPDK_VAGRANT_HTTP_PROXY= 00:00:48.412 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:48.412 SPDK_OPENSTACK_NETWORK=0 00:00:48.412 VAGRANT_PACKAGE_BOX=0 00:00:48.412 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:48.412 FORCE_DISTRO=true 00:00:48.412 VAGRANT_BOX_VERSION= 00:00:48.412 EXTRA_VAGRANTFILES= 00:00:48.412 NIC_MODEL=e1000 00:00:48.412 00:00:48.412 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:00:48.412 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:00:50.962 Bringing machine 'default' up with 'libvirt' provider... 00:00:51.222 ==> default: Creating image (snapshot of base box volume). 00:00:51.483 ==> default: Creating domain with the following settings... 
00:00:51.483 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732024909_72f754d64a52c4c9c9d4 00:00:51.483 ==> default: -- Domain type: kvm 00:00:51.483 ==> default: -- Cpus: 10 00:00:51.483 ==> default: -- Feature: acpi 00:00:51.483 ==> default: -- Feature: apic 00:00:51.483 ==> default: -- Feature: pae 00:00:51.483 ==> default: -- Memory: 12288M 00:00:51.483 ==> default: -- Memory Backing: hugepages: 00:00:51.483 ==> default: -- Management MAC: 00:00:51.483 ==> default: -- Loader: 00:00:51.483 ==> default: -- Nvram: 00:00:51.483 ==> default: -- Base box: spdk/fedora39 00:00:51.483 ==> default: -- Storage pool: default 00:00:51.483 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732024909_72f754d64a52c4c9c9d4.img (20G) 00:00:51.483 ==> default: -- Volume Cache: default 00:00:51.483 ==> default: -- Kernel: 00:00:51.483 ==> default: -- Initrd: 00:00:51.483 ==> default: -- Graphics Type: vnc 00:00:51.483 ==> default: -- Graphics Port: -1 00:00:51.483 ==> default: -- Graphics IP: 127.0.0.1 00:00:51.483 ==> default: -- Graphics Password: Not defined 00:00:51.483 ==> default: -- Video Type: cirrus 00:00:51.483 ==> default: -- Video VRAM: 9216 00:00:51.483 ==> default: -- Sound Type: 00:00:51.483 ==> default: -- Keymap: en-us 00:00:51.483 ==> default: -- TPM Path: 00:00:51.483 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:51.483 ==> default: -- Command line args: 00:00:51.483 ==> default: -> value=-device, 00:00:51.483 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:00:51.483 ==> default: -> value=-drive, 00:00:51.483 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:51.483 ==> default: -> value=-device, 00:00:51.483 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:51.483 ==> default: -> value=-device, 00:00:51.483 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:00:51.483 ==> default: -> value=-drive, 00:00:51.484 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:00:51.484 ==> default: -> value=-device, 00:00:51.484 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:51.484 ==> default: -> value=-device, 00:00:51.484 ==> default: -> value=nvme,id=nvme-2,serial=12342, 00:00:51.484 ==> default: -> value=-drive, 00:00:51.484 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:51.484 ==> default: -> value=-device, 00:00:51.484 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:51.484 ==> default: -> value=-drive, 00:00:51.484 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:51.484 ==> default: -> value=-device, 00:00:51.484 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:51.484 ==> default: -> value=-drive, 00:00:51.484 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:51.484 ==> default: -> value=-device, 00:00:51.484 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:51.484 ==> default: -> value=-device, 00:00:51.484 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:51.484 ==> default: -> value=-device, 00:00:51.484 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3, 00:00:51.484 ==> default: -> value=-drive, 00:00:51.484 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:51.484 ==> default: -> value=-device, 00:00:51.484 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:51.745 ==> default: Creating shared folders metadata... 00:00:51.745 ==> default: Starting domain. 00:00:53.663 ==> default: Waiting for domain to get an IP address... 00:01:15.629 ==> default: Waiting for SSH to become available... 00:01:15.629 ==> default: Configuring and enabling network interfaces... 00:01:17.014 default: SSH address: 192.168.121.194:22 00:01:17.014 default: SSH username: vagrant 00:01:17.014 default: SSH auth method: private key 00:01:19.563 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:27.730 ==> default: Mounting SSHFS shared folder... 00:01:29.115 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:29.115 ==> default: Checking Mount.. 00:01:30.502 ==> default: Folder Successfully Mounted! 00:01:30.502 00:01:30.502 SUCCESS! 00:01:30.502 00:01:30.502 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:30.502 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:30.502 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:30.502 00:01:30.513 [Pipeline] } 00:01:30.528 [Pipeline] // stage 00:01:30.538 [Pipeline] dir 00:01:30.538 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:30.540 [Pipeline] { 00:01:30.552 [Pipeline] catchError 00:01:30.554 [Pipeline] { 00:01:30.566 [Pipeline] sh 00:01:30.858 + vagrant ssh-config --host vagrant 00:01:30.858 + sed -ne '/^Host/,$p' 00:01:30.858 + tee ssh_conf 00:01:33.468 Host vagrant 00:01:33.468 HostName 192.168.121.194 00:01:33.468 User vagrant 00:01:33.468 Port 22 00:01:33.468 UserKnownHostsFile /dev/null 00:01:33.468 StrictHostKeyChecking no 00:01:33.468 PasswordAuthentication no 00:01:33.468 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:33.468 IdentitiesOnly yes 00:01:33.468 LogLevel FATAL 00:01:33.468 ForwardAgent yes 00:01:33.469 ForwardX11 yes 00:01:33.469 00:01:33.484 [Pipeline] withEnv 00:01:33.486 [Pipeline] { 00:01:33.501 [Pipeline] sh 00:01:33.788 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:33.788 source /etc/os-release 00:01:33.788 [[ -e /image.version ]] && img=$(< /image.version) 00:01:33.788 # Minimal, systemd-like check. 
00:01:33.788 if [[ -e /.dockerenv ]]; then 00:01:33.788 # Clear garbage from the node'\''s name: 00:01:33.788 # agt-er_autotest_547-896 -> autotest_547-896 00:01:33.788 # $HOSTNAME is the actual container id 00:01:33.788 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:33.788 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:33.788 # We can assume this is a mount from a host where container is running, 00:01:33.788 # so fetch its hostname to easily identify the target swarm worker. 00:01:33.788 container="$(< /etc/hostname) ($agent)" 00:01:33.788 else 00:01:33.788 # Fallback 00:01:33.788 container=$agent 00:01:33.788 fi 00:01:33.788 fi 00:01:33.788 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:33.788 ' 00:01:34.062 [Pipeline] } 00:01:34.080 [Pipeline] // withEnv 00:01:34.089 [Pipeline] setCustomBuildProperty 00:01:34.105 [Pipeline] stage 00:01:34.108 [Pipeline] { (Tests) 00:01:34.126 [Pipeline] sh 00:01:34.413 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:34.688 [Pipeline] sh 00:01:34.973 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:35.286 [Pipeline] timeout 00:01:35.287 Timeout set to expire in 50 min 00:01:35.290 [Pipeline] { 00:01:35.305 [Pipeline] sh 00:01:35.591 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:36.161 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:01:36.173 [Pipeline] sh 00:01:36.453 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:36.725 [Pipeline] sh 00:01:37.061 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:37.088 [Pipeline] sh 00:01:37.371 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:37.633 ++ readlink -f spdk_repo 00:01:37.633 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:37.633 + [[ -n /home/vagrant/spdk_repo ]] 00:01:37.633 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:37.633 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:37.633 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:37.633 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:37.633 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:37.633 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:37.633 + cd /home/vagrant/spdk_repo 00:01:37.633 + source /etc/os-release 00:01:37.633 ++ NAME='Fedora Linux' 00:01:37.633 ++ VERSION='39 (Cloud Edition)' 00:01:37.633 ++ ID=fedora 00:01:37.633 ++ VERSION_ID=39 00:01:37.633 ++ VERSION_CODENAME= 00:01:37.633 ++ PLATFORM_ID=platform:f39 00:01:37.633 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:37.633 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:37.633 ++ LOGO=fedora-logo-icon 00:01:37.633 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:37.633 ++ HOME_URL=https://fedoraproject.org/ 00:01:37.633 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:37.633 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:37.633 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:37.633 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:37.633 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:37.633 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:37.633 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:37.633 ++ SUPPORT_END=2024-11-12 00:01:37.633 ++ VARIANT='Cloud Edition' 00:01:37.633 ++ VARIANT_ID=cloud 00:01:37.633 + uname -a 00:01:37.633 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:37.633 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:37.633 Hugepages 00:01:37.633 node hugesize free / total 00:01:37.633 node0 1048576kB 0 / 0 00:01:37.633 node0 2048kB 0 / 0 00:01:37.633 00:01:37.633 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:37.633 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:37.633 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:37.633 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:37.894 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:37.894 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:37.894 + rm -f /tmp/spdk-ld-path 00:01:37.894 + source autorun-spdk.conf 00:01:37.894 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.894 ++ SPDK_TEST_NVME=1 00:01:37.894 ++ SPDK_TEST_FTL=1 00:01:37.894 ++ SPDK_TEST_ISAL=1 00:01:37.894 ++ SPDK_RUN_ASAN=1 00:01:37.894 ++ SPDK_RUN_UBSAN=1 00:01:37.894 ++ SPDK_TEST_XNVME=1 00:01:37.894 ++ SPDK_TEST_NVME_FDP=1 00:01:37.894 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:37.894 ++ RUN_NIGHTLY=1 00:01:37.894 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:37.894 + [[ -n '' ]] 00:01:37.894 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:37.894 + for M in /var/spdk/build-*-manifest.txt 00:01:37.894 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:37.894 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:37.894 + for M in /var/spdk/build-*-manifest.txt 00:01:37.894 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:37.894 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:37.894 + for M in /var/spdk/build-*-manifest.txt 00:01:37.894 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:37.894 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:37.894 ++ uname 00:01:37.894 + [[ Linux == \L\i\n\u\x ]] 00:01:37.894 + sudo dmesg -T 00:01:37.894 + sudo dmesg --clear 00:01:37.894 + dmesg_pid=4996 00:01:37.894 + [[ Fedora Linux == FreeBSD ]] 00:01:37.894 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:37.894 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:37.894 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:37.894 + [[ -x /usr/src/fio-static/fio ]] 00:01:37.894 + sudo dmesg -Tw 00:01:37.894 + export FIO_BIN=/usr/src/fio-static/fio 00:01:37.894 + FIO_BIN=/usr/src/fio-static/fio 00:01:37.894 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:37.894 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:37.894 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:37.894 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:37.894 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:37.894 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:37.894 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:37.894 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:37.894 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:37.894 Test configuration: 00:01:37.894 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.894 SPDK_TEST_NVME=1 00:01:37.894 SPDK_TEST_FTL=1 00:01:37.894 SPDK_TEST_ISAL=1 00:01:37.894 SPDK_RUN_ASAN=1 00:01:37.894 SPDK_RUN_UBSAN=1 00:01:37.894 SPDK_TEST_XNVME=1 00:01:37.894 SPDK_TEST_NVME_FDP=1 00:01:37.894 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:37.894 RUN_NIGHTLY=1 14:02:36 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:37.894 14:02:36 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:37.894 14:02:36 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:37.894 14:02:36 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:37.894 14:02:36 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:37.894 14:02:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.894 14:02:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.894 14:02:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.894 14:02:36 -- paths/export.sh@5 -- $ export PATH 00:01:37.895 14:02:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.895 14:02:36 -- common/autobuild_common.sh@439 -- $ 
out=/home/vagrant/spdk_repo/spdk/../output 00:01:37.895 14:02:36 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:37.895 14:02:36 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732024956.XXXXXX 00:01:37.895 14:02:36 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732024956.GBO1vS 00:01:37.895 14:02:36 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:37.895 14:02:36 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:01:37.895 14:02:36 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:37.895 14:02:36 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:37.895 14:02:36 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:37.895 14:02:36 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:37.895 14:02:36 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:37.895 14:02:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.156 14:02:36 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:38.156 14:02:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:38.156 14:02:36 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:38.156 14:02:36 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:38.156 14:02:36 -- spdk/autobuild.sh@16 -- $ date -u 00:01:38.156 Tue Nov 19 02:02:36 PM UTC 2024 00:01:38.156 14:02:36 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:38.156 LTS-67-gc13c99a5e 00:01:38.156 14:02:36 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:38.156 14:02:36 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:38.156 14:02:36 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:38.156 14:02:36 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:38.156 14:02:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.156 ************************************ 00:01:38.156 START TEST asan 00:01:38.156 ************************************ 00:01:38.156 using asan 00:01:38.156 14:02:36 -- common/autotest_common.sh@1114 -- $ echo 'using asan' 00:01:38.156 00:01:38.156 real 0m0.000s 00:01:38.156 user 0m0.000s 00:01:38.156 sys 0m0.000s 00:01:38.156 14:02:36 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:38.156 ************************************ 00:01:38.156 END TEST asan 00:01:38.156 ************************************ 00:01:38.156 14:02:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.156 14:02:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:38.156 14:02:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:38.156 14:02:36 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:38.156 14:02:36 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:38.156 14:02:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.156 ************************************ 00:01:38.156 START TEST ubsan 00:01:38.156 ************************************ 00:01:38.156 using ubsan 00:01:38.156 14:02:36 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:38.156 00:01:38.156 real 0m0.000s 00:01:38.156 user 0m0.000s 00:01:38.156 
sys 0m0.000s 00:01:38.156 14:02:36 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:38.156 ************************************ 00:01:38.156 END TEST ubsan 00:01:38.156 ************************************ 00:01:38.156 14:02:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.156 14:02:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:38.156 14:02:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:38.156 14:02:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:38.156 14:02:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:38.156 14:02:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:38.156 14:02:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:38.156 14:02:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:38.156 14:02:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:38.156 14:02:36 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:38.156 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:38.156 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:38.729 Using 'verbs' RDMA provider 00:01:51.542 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:01.548 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:01.548 Creating mk/config.mk...done. 00:02:01.548 Creating mk/cc.flags.mk...done. 00:02:01.548 Type 'make' to build. 00:02:01.548 14:03:00 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:01.548 14:03:00 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:01.548 14:03:00 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:01.548 14:03:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:01.548 ************************************ 00:02:01.548 START TEST make 00:02:01.548 ************************************ 00:02:01.548 14:03:00 -- common/autotest_common.sh@1114 -- $ make -j10 00:02:01.808 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:01.808 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:01.808 meson setup builddir \ 00:02:01.808 -Dwith-libaio=enabled \ 00:02:01.808 -Dwith-liburing=enabled \ 00:02:01.808 -Dwith-libvfn=disabled \ 00:02:01.808 -Dwith-spdk=false && \ 00:02:01.808 meson compile -C builddir && \ 00:02:01.808 cd -) 00:02:01.808 make[1]: Nothing to be done for 'all'. 
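The parenthesized recipe echoed above is the whole xnvme configure-and-build step, and the Meson output that follows is what it produces. As a minimal standalone sketch of the same step (directory, PKG_CONFIG_PATH entries, and -D switches copied from this log; assumes meson and ninja are installed on the guest):

#!/usr/bin/env bash
# Sketch: reproduce the xnvme sub-build by hand, outside of `make`.
# Paths and option values below are taken from the log above.
set -euo pipefail

cd /home/vagrant/spdk_repo/spdk/xnvme

# Let pkg-config locate liburing/libaio in either lib dir.
export PKG_CONFIG_PATH="${PKG_CONFIG_PATH:-}:/usr/lib/pkgconfig:/usr/lib64/pkgconfig"

# Same feature selection as this CI run: libaio and io_uring backends
# enabled, the libvfn and SPDK backends switched off.
meson setup builddir \
  -Dwith-libaio=enabled \
  -Dwith-liburing=enabled \
  -Dwith-libvfn=disabled \
  -Dwith-spdk=false

meson compile -C builddir

Note that -Dwith-spdk=false is plausibly there to avoid a circular dependency, since this xnvme tree is itself vendored inside the SPDK checkout being built.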
00:02:04.364 The Meson build system 00:02:04.364 Version: 1.5.0 00:02:04.364 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:04.364 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:04.364 Build type: native build 00:02:04.364 Project name: xnvme 00:02:04.364 Project version: 0.7.3 00:02:04.364 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:04.364 C linker for the host machine: cc ld.bfd 2.40-14 00:02:04.364 Host machine cpu family: x86_64 00:02:04.364 Host machine cpu: x86_64 00:02:04.364 Message: host_machine.system: linux 00:02:04.364 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:04.364 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:04.364 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:04.364 Run-time dependency threads found: YES 00:02:04.364 Has header "setupapi.h" : NO 00:02:04.364 Has header "linux/blkzoned.h" : YES 00:02:04.364 Has header "linux/blkzoned.h" : YES (cached) 00:02:04.364 Has header "libaio.h" : YES 00:02:04.364 Library aio found: YES 00:02:04.364 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:04.364 Run-time dependency liburing found: YES 2.2 00:02:04.364 Dependency libvfn skipped: feature with-libvfn disabled 00:02:04.364 Run-time dependency appleframeworks found: NO (tried framework) 00:02:04.364 Run-time dependency appleframeworks found: NO (tried framework) 00:02:04.364 Configuring xnvme_config.h using configuration 00:02:04.364 Configuring xnvme.spec using configuration 00:02:04.364 Run-time dependency bash-completion found: YES 2.11 00:02:04.364 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:04.364 Program cp found: YES (/usr/bin/cp) 00:02:04.364 Has header "winsock2.h" : NO 00:02:04.364 Has header "dbghelp.h" : NO 00:02:04.364 Library rpcrt4 found: NO 00:02:04.364 Library rt found: YES 00:02:04.364 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:04.364 Found CMake: /usr/bin/cmake (3.27.7) 00:02:04.364 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:02:04.364 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:02:04.364 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:02:04.364 Build targets in project: 32 00:02:04.364 00:02:04.364 xnvme 0.7.3 00:02:04.364 00:02:04.364 User defined options 00:02:04.364 with-libaio : enabled 00:02:04.364 with-liburing: enabled 00:02:04.364 with-libvfn : disabled 00:02:04.364 with-spdk : false 00:02:04.364 00:02:04.364 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:04.364 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:04.622 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:02:04.622 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:04.622 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:02:04.622 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:04.622 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:02:04.622 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:04.622 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:02:04.622 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:04.622 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:04.622 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:04.622 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:04.622 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:04.622 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:04.622 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:04.622 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:04.622 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:04.622 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:04.622 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:04.622 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:04.622 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:04.622 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:04.882 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:04.882 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:04.882 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:04.882 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:04.882 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:04.882 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:04.882 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:04.882 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:04.882 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:04.882 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:04.883 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:04.883 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:04.883 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:04.883 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:04.883 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:04.883 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:04.883 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:04.883 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:04.883 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:04.883 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:04.883 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:04.883 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:04.883 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:04.883 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:04.883 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:04.883 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:04.883 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:04.883 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:04.883 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:04.883 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:04.883 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:04.883 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:04.883 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:04.883 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:04.883 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:04.883 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:04.883 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:04.883 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:05.142 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:05.142 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:05.142 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:05.142 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:05.142 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:05.142 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:05.142 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:05.142 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:05.142 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:05.142 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:05.142 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:05.142 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:05.142 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:05.142 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:05.142 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:05.142 [75/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:05.142 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:05.142 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:05.142 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:05.142 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:05.142 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:05.399 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:05.399 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:05.399 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:05.399 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:05.399 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:05.399 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:05.399 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:05.399 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:05.399 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:05.399 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:05.399 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:05.399 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:05.399 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:05.399 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:05.399 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:05.399 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:05.399 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:05.399 [98/203] 
Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:05.399 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:05.399 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:05.399 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:05.399 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:05.399 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:05.399 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:05.399 [105/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:05.399 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:05.399 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:05.657 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:05.657 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:05.657 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:05.657 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:05.657 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:05.657 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:05.657 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:05.657 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:05.657 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:05.657 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:05.657 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:05.657 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:05.657 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:05.657 [121/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:05.657 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:05.657 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:05.657 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:05.657 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:05.657 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:05.657 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:05.657 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:02:05.657 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:05.657 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:02:05.657 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:05.657 [132/203] Linking target lib/libxnvme.so 00:02:05.657 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:05.657 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:05.657 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:05.657 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:05.657 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:05.915 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:05.915 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:05.915 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:05.915 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:05.915 [142/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 
00:02:05.915 [143/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:05.915 [144/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:05.915 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:05.915 [146/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:05.915 [147/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:05.915 [148/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:05.915 [149/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:05.915 [150/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:05.915 [151/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:05.915 [152/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:05.915 [153/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:05.915 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:05.915 [155/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:06.173 [156/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:06.173 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:06.173 [158/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:06.173 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:06.173 [160/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:06.173 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:06.173 [162/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:06.173 [163/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:06.173 [164/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:06.173 [165/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:06.173 [166/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:06.173 [167/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:06.173 [168/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:06.173 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:06.173 [170/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:06.431 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:06.431 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:06.431 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:06.431 [174/203] Linking static target lib/libxnvme.a 00:02:06.431 [175/203] Linking target tests/xnvme_tests_cli 00:02:06.431 [176/203] Linking target tests/xnvme_tests_async_intf 00:02:06.431 [177/203] Linking target tests/xnvme_tests_lblk 00:02:06.431 [178/203] Linking target tests/xnvme_tests_buf 00:02:06.431 [179/203] Linking target tests/xnvme_tests_ioworker 00:02:06.431 [180/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:06.431 [181/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:06.431 [182/203] Linking target tests/xnvme_tests_znd_append 00:02:06.431 [183/203] Linking target tests/xnvme_tests_enum 00:02:06.431 [184/203] Linking target tests/xnvme_tests_scc 00:02:06.431 [185/203] Linking target tests/xnvme_tests_znd_state 00:02:06.431 [186/203] Linking target tests/xnvme_tests_xnvme_file 00:02:06.431 [187/203] Linking target tests/xnvme_tests_kvs 00:02:06.431 [188/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:06.431 [189/203] Linking target tests/xnvme_tests_map 00:02:06.431 [190/203] Linking target tools/xdd 00:02:06.431 [191/203] Linking target 
tools/xnvme 00:02:06.431 [192/203] Linking target examples/xnvme_io_async 00:02:06.431 [193/203] Linking target examples/xnvme_enum 00:02:06.431 [194/203] Linking target tools/lblk 00:02:06.431 [195/203] Linking target tools/zoned 00:02:06.431 [196/203] Linking target examples/xnvme_dev 00:02:06.431 [197/203] Linking target tools/xnvme_file 00:02:06.431 [198/203] Linking target examples/xnvme_hello 00:02:06.431 [199/203] Linking target tools/kvs 00:02:06.689 [200/203] Linking target examples/zoned_io_sync 00:02:06.689 [201/203] Linking target examples/xnvme_single_async 00:02:06.689 [202/203] Linking target examples/xnvme_single_sync 00:02:06.689 [203/203] Linking target examples/zoned_io_async 00:02:06.689 INFO: autodetecting backend as ninja 00:02:06.689 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:06.689 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:13.244 The Meson build system 00:02:13.244 Version: 1.5.0 00:02:13.244 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:13.244 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:13.244 Build type: native build 00:02:13.244 Program cat found: YES (/usr/bin/cat) 00:02:13.244 Project name: DPDK 00:02:13.244 Project version: 23.11.0 00:02:13.244 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:13.244 C linker for the host machine: cc ld.bfd 2.40-14 00:02:13.244 Host machine cpu family: x86_64 00:02:13.244 Host machine cpu: x86_64 00:02:13.244 Message: ## Building in Developer Mode ## 00:02:13.244 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:13.244 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:13.244 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:13.244 Program python3 found: YES (/usr/bin/python3) 00:02:13.244 Program cat found: YES (/usr/bin/cat) 00:02:13.244 Compiler for C supports arguments -march=native: YES 00:02:13.244 Checking for size of "void *" : 8 00:02:13.244 Checking for size of "void *" : 8 (cached) 00:02:13.244 Library m found: YES 00:02:13.244 Library numa found: YES 00:02:13.244 Has header "numaif.h" : YES 00:02:13.244 Library fdt found: NO 00:02:13.244 Library execinfo found: NO 00:02:13.244 Has header "execinfo.h" : YES 00:02:13.244 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:13.244 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:13.244 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:13.244 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:13.244 Run-time dependency openssl found: YES 3.1.1 00:02:13.245 Run-time dependency libpcap found: YES 1.10.4 00:02:13.245 Has header "pcap.h" with dependency libpcap: YES 00:02:13.245 Compiler for C supports arguments -Wcast-qual: YES 00:02:13.245 Compiler for C supports arguments -Wdeprecated: YES 00:02:13.245 Compiler for C supports arguments -Wformat: YES 00:02:13.245 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:13.245 Compiler for C supports arguments -Wformat-security: NO 00:02:13.245 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:13.245 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:13.245 Compiler for C supports arguments -Wnested-externs: YES 00:02:13.245 Compiler for C supports arguments -Wold-style-definition: YES 00:02:13.245 Compiler for C supports arguments -Wpointer-arith: YES 
00:02:13.245 Compiler for C supports arguments -Wsign-compare: YES 00:02:13.245 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:13.245 Compiler for C supports arguments -Wundef: YES 00:02:13.245 Compiler for C supports arguments -Wwrite-strings: YES 00:02:13.245 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:13.245 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:13.245 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:13.245 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:13.245 Program objdump found: YES (/usr/bin/objdump) 00:02:13.245 Compiler for C supports arguments -mavx512f: YES 00:02:13.245 Checking if "AVX512 checking" compiles: YES 00:02:13.245 Fetching value of define "__SSE4_2__" : 1 00:02:13.245 Fetching value of define "__AES__" : 1 00:02:13.245 Fetching value of define "__AVX__" : 1 00:02:13.245 Fetching value of define "__AVX2__" : 1 00:02:13.245 Fetching value of define "__AVX512BW__" : 1 00:02:13.245 Fetching value of define "__AVX512CD__" : 1 00:02:13.245 Fetching value of define "__AVX512DQ__" : 1 00:02:13.245 Fetching value of define "__AVX512F__" : 1 00:02:13.245 Fetching value of define "__AVX512VL__" : 1 00:02:13.245 Fetching value of define "__PCLMUL__" : 1 00:02:13.245 Fetching value of define "__RDRND__" : 1 00:02:13.245 Fetching value of define "__RDSEED__" : 1 00:02:13.245 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:13.245 Fetching value of define "__znver1__" : (undefined) 00:02:13.245 Fetching value of define "__znver2__" : (undefined) 00:02:13.245 Fetching value of define "__znver3__" : (undefined) 00:02:13.245 Fetching value of define "__znver4__" : (undefined) 00:02:13.245 Library asan found: YES 00:02:13.245 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:13.245 Message: lib/log: Defining dependency "log" 00:02:13.245 Message: lib/kvargs: Defining dependency "kvargs" 00:02:13.245 Message: lib/telemetry: Defining dependency "telemetry" 00:02:13.245 Library rt found: YES 00:02:13.245 Checking for function "getentropy" : NO 00:02:13.245 Message: lib/eal: Defining dependency "eal" 00:02:13.245 Message: lib/ring: Defining dependency "ring" 00:02:13.245 Message: lib/rcu: Defining dependency "rcu" 00:02:13.245 Message: lib/mempool: Defining dependency "mempool" 00:02:13.245 Message: lib/mbuf: Defining dependency "mbuf" 00:02:13.245 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:13.245 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:13.245 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:13.245 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:13.245 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:13.245 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:13.245 Compiler for C supports arguments -mpclmul: YES 00:02:13.245 Compiler for C supports arguments -maes: YES 00:02:13.245 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:13.245 Compiler for C supports arguments -mavx512bw: YES 00:02:13.245 Compiler for C supports arguments -mavx512dq: YES 00:02:13.245 Compiler for C supports arguments -mavx512vl: YES 00:02:13.245 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:13.245 Compiler for C supports arguments -mavx2: YES 00:02:13.245 Compiler for C supports arguments -mavx: YES 00:02:13.245 Message: lib/net: Defining dependency "net" 00:02:13.245 Message: lib/meter: Defining dependency "meter" 00:02:13.245 Message: lib/ethdev: Defining 
dependency "ethdev" 00:02:13.245 Message: lib/pci: Defining dependency "pci" 00:02:13.245 Message: lib/cmdline: Defining dependency "cmdline" 00:02:13.245 Message: lib/hash: Defining dependency "hash" 00:02:13.245 Message: lib/timer: Defining dependency "timer" 00:02:13.245 Message: lib/compressdev: Defining dependency "compressdev" 00:02:13.245 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:13.245 Message: lib/dmadev: Defining dependency "dmadev" 00:02:13.245 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:13.245 Message: lib/power: Defining dependency "power" 00:02:13.245 Message: lib/reorder: Defining dependency "reorder" 00:02:13.245 Message: lib/security: Defining dependency "security" 00:02:13.245 Has header "linux/userfaultfd.h" : YES 00:02:13.245 Has header "linux/vduse.h" : YES 00:02:13.245 Message: lib/vhost: Defining dependency "vhost" 00:02:13.245 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:13.245 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:13.245 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:13.245 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:13.245 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:13.245 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:13.245 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:13.245 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:13.245 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:13.245 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:13.245 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:13.245 Configuring doxy-api-html.conf using configuration 00:02:13.245 Configuring doxy-api-man.conf using configuration 00:02:13.245 Program mandb found: YES (/usr/bin/mandb) 00:02:13.245 Program sphinx-build found: NO 00:02:13.245 Configuring rte_build_config.h using configuration 00:02:13.245 Message: 00:02:13.245 ================= 00:02:13.245 Applications Enabled 00:02:13.245 ================= 00:02:13.245 00:02:13.245 apps: 00:02:13.245 00:02:13.245 00:02:13.245 Message: 00:02:13.245 ================= 00:02:13.245 Libraries Enabled 00:02:13.245 ================= 00:02:13.245 00:02:13.245 libs: 00:02:13.245 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:13.245 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:13.245 cryptodev, dmadev, power, reorder, security, vhost, 00:02:13.245 00:02:13.245 Message: 00:02:13.245 =============== 00:02:13.245 Drivers Enabled 00:02:13.245 =============== 00:02:13.245 00:02:13.245 common: 00:02:13.245 00:02:13.245 bus: 00:02:13.245 pci, vdev, 00:02:13.245 mempool: 00:02:13.245 ring, 00:02:13.245 dma: 00:02:13.245 00:02:13.245 net: 00:02:13.245 00:02:13.245 crypto: 00:02:13.245 00:02:13.245 compress: 00:02:13.245 00:02:13.245 vdpa: 00:02:13.245 00:02:13.245 00:02:13.245 Message: 00:02:13.245 ================= 00:02:13.245 Content Skipped 00:02:13.245 ================= 00:02:13.245 00:02:13.245 apps: 00:02:13.245 dumpcap: explicitly disabled via build config 00:02:13.245 graph: explicitly disabled via build config 00:02:13.245 pdump: explicitly disabled via build config 00:02:13.245 proc-info: explicitly disabled via build config 00:02:13.245 test-acl: explicitly disabled via build config 00:02:13.245 test-bbdev: explicitly disabled via build config 00:02:13.245 test-cmdline: explicitly 
disabled via build config 00:02:13.245 test-compress-perf: explicitly disabled via build config 00:02:13.245 test-crypto-perf: explicitly disabled via build config 00:02:13.245 test-dma-perf: explicitly disabled via build config 00:02:13.245 test-eventdev: explicitly disabled via build config 00:02:13.245 test-fib: explicitly disabled via build config 00:02:13.245 test-flow-perf: explicitly disabled via build config 00:02:13.245 test-gpudev: explicitly disabled via build config 00:02:13.245 test-mldev: explicitly disabled via build config 00:02:13.245 test-pipeline: explicitly disabled via build config 00:02:13.245 test-pmd: explicitly disabled via build config 00:02:13.245 test-regex: explicitly disabled via build config 00:02:13.245 test-sad: explicitly disabled via build config 00:02:13.245 test-security-perf: explicitly disabled via build config 00:02:13.245 00:02:13.245 libs: 00:02:13.245 metrics: explicitly disabled via build config 00:02:13.245 acl: explicitly disabled via build config 00:02:13.245 bbdev: explicitly disabled via build config 00:02:13.245 bitratestats: explicitly disabled via build config 00:02:13.245 bpf: explicitly disabled via build config 00:02:13.245 cfgfile: explicitly disabled via build config 00:02:13.245 distributor: explicitly disabled via build config 00:02:13.245 efd: explicitly disabled via build config 00:02:13.245 eventdev: explicitly disabled via build config 00:02:13.245 dispatcher: explicitly disabled via build config 00:02:13.245 gpudev: explicitly disabled via build config 00:02:13.245 gro: explicitly disabled via build config 00:02:13.245 gso: explicitly disabled via build config 00:02:13.245 ip_frag: explicitly disabled via build config 00:02:13.245 jobstats: explicitly disabled via build config 00:02:13.245 latencystats: explicitly disabled via build config 00:02:13.245 lpm: explicitly disabled via build config 00:02:13.245 member: explicitly disabled via build config 00:02:13.245 pcapng: explicitly disabled via build config 00:02:13.245 rawdev: explicitly disabled via build config 00:02:13.245 regexdev: explicitly disabled via build config 00:02:13.245 mldev: explicitly disabled via build config 00:02:13.245 rib: explicitly disabled via build config 00:02:13.245 sched: explicitly disabled via build config 00:02:13.245 stack: explicitly disabled via build config 00:02:13.245 ipsec: explicitly disabled via build config 00:02:13.245 pdcp: explicitly disabled via build config 00:02:13.245 fib: explicitly disabled via build config 00:02:13.246 port: explicitly disabled via build config 00:02:13.246 pdump: explicitly disabled via build config 00:02:13.246 table: explicitly disabled via build config 00:02:13.246 pipeline: explicitly disabled via build config 00:02:13.246 graph: explicitly disabled via build config 00:02:13.246 node: explicitly disabled via build config 00:02:13.246 00:02:13.246 drivers: 00:02:13.246 common/cpt: not in enabled drivers build config 00:02:13.246 common/dpaax: not in enabled drivers build config 00:02:13.246 common/iavf: not in enabled drivers build config 00:02:13.246 common/idpf: not in enabled drivers build config 00:02:13.246 common/mvep: not in enabled drivers build config 00:02:13.246 common/octeontx: not in enabled drivers build config 00:02:13.246 bus/auxiliary: not in enabled drivers build config 00:02:13.246 bus/cdx: not in enabled drivers build config 00:02:13.246 bus/dpaa: not in enabled drivers build config 00:02:13.246 bus/fslmc: not in enabled drivers build config 00:02:13.246 bus/ifpga: not in enabled 
drivers build config 00:02:13.246 bus/platform: not in enabled drivers build config 00:02:13.246 bus/vmbus: not in enabled drivers build config 00:02:13.246 common/cnxk: not in enabled drivers build config 00:02:13.246 common/mlx5: not in enabled drivers build config 00:02:13.246 common/nfp: not in enabled drivers build config 00:02:13.246 common/qat: not in enabled drivers build config 00:02:13.246 common/sfc_efx: not in enabled drivers build config 00:02:13.246 mempool/bucket: not in enabled drivers build config 00:02:13.246 mempool/cnxk: not in enabled drivers build config 00:02:13.246 mempool/dpaa: not in enabled drivers build config 00:02:13.246 mempool/dpaa2: not in enabled drivers build config 00:02:13.246 mempool/octeontx: not in enabled drivers build config 00:02:13.246 mempool/stack: not in enabled drivers build config 00:02:13.246 dma/cnxk: not in enabled drivers build config 00:02:13.246 dma/dpaa: not in enabled drivers build config 00:02:13.246 dma/dpaa2: not in enabled drivers build config 00:02:13.246 dma/hisilicon: not in enabled drivers build config 00:02:13.246 dma/idxd: not in enabled drivers build config 00:02:13.246 dma/ioat: not in enabled drivers build config 00:02:13.246 dma/skeleton: not in enabled drivers build config 00:02:13.246 net/af_packet: not in enabled drivers build config 00:02:13.246 net/af_xdp: not in enabled drivers build config 00:02:13.246 net/ark: not in enabled drivers build config 00:02:13.246 net/atlantic: not in enabled drivers build config 00:02:13.246 net/avp: not in enabled drivers build config 00:02:13.246 net/axgbe: not in enabled drivers build config 00:02:13.246 net/bnx2x: not in enabled drivers build config 00:02:13.246 net/bnxt: not in enabled drivers build config 00:02:13.246 net/bonding: not in enabled drivers build config 00:02:13.246 net/cnxk: not in enabled drivers build config 00:02:13.246 net/cpfl: not in enabled drivers build config 00:02:13.246 net/cxgbe: not in enabled drivers build config 00:02:13.246 net/dpaa: not in enabled drivers build config 00:02:13.246 net/dpaa2: not in enabled drivers build config 00:02:13.246 net/e1000: not in enabled drivers build config 00:02:13.246 net/ena: not in enabled drivers build config 00:02:13.246 net/enetc: not in enabled drivers build config 00:02:13.246 net/enetfec: not in enabled drivers build config 00:02:13.246 net/enic: not in enabled drivers build config 00:02:13.246 net/failsafe: not in enabled drivers build config 00:02:13.246 net/fm10k: not in enabled drivers build config 00:02:13.246 net/gve: not in enabled drivers build config 00:02:13.246 net/hinic: not in enabled drivers build config 00:02:13.246 net/hns3: not in enabled drivers build config 00:02:13.246 net/i40e: not in enabled drivers build config 00:02:13.246 net/iavf: not in enabled drivers build config 00:02:13.246 net/ice: not in enabled drivers build config 00:02:13.246 net/idpf: not in enabled drivers build config 00:02:13.246 net/igc: not in enabled drivers build config 00:02:13.246 net/ionic: not in enabled drivers build config 00:02:13.246 net/ipn3ke: not in enabled drivers build config 00:02:13.246 net/ixgbe: not in enabled drivers build config 00:02:13.246 net/mana: not in enabled drivers build config 00:02:13.246 net/memif: not in enabled drivers build config 00:02:13.246 net/mlx4: not in enabled drivers build config 00:02:13.246 net/mlx5: not in enabled drivers build config 00:02:13.246 net/mvneta: not in enabled drivers build config 00:02:13.246 net/mvpp2: not in enabled drivers build config 00:02:13.246 
net/netvsc: not in enabled drivers build config 00:02:13.246 net/nfb: not in enabled drivers build config 00:02:13.246 net/nfp: not in enabled drivers build config 00:02:13.246 net/ngbe: not in enabled drivers build config 00:02:13.246 net/null: not in enabled drivers build config 00:02:13.246 net/octeontx: not in enabled drivers build config 00:02:13.246 net/octeon_ep: not in enabled drivers build config 00:02:13.246 net/pcap: not in enabled drivers build config 00:02:13.246 net/pfe: not in enabled drivers build config 00:02:13.246 net/qede: not in enabled drivers build config 00:02:13.246 net/ring: not in enabled drivers build config 00:02:13.246 net/sfc: not in enabled drivers build config 00:02:13.246 net/softnic: not in enabled drivers build config 00:02:13.246 net/tap: not in enabled drivers build config 00:02:13.246 net/thunderx: not in enabled drivers build config 00:02:13.246 net/txgbe: not in enabled drivers build config 00:02:13.246 net/vdev_netvsc: not in enabled drivers build config 00:02:13.246 net/vhost: not in enabled drivers build config 00:02:13.246 net/virtio: not in enabled drivers build config 00:02:13.246 net/vmxnet3: not in enabled drivers build config 00:02:13.246 raw/*: missing internal dependency, "rawdev" 00:02:13.246 crypto/armv8: not in enabled drivers build config 00:02:13.246 crypto/bcmfs: not in enabled drivers build config 00:02:13.246 crypto/caam_jr: not in enabled drivers build config 00:02:13.246 crypto/ccp: not in enabled drivers build config 00:02:13.246 crypto/cnxk: not in enabled drivers build config 00:02:13.246 crypto/dpaa_sec: not in enabled drivers build config 00:02:13.246 crypto/dpaa2_sec: not in enabled drivers build config 00:02:13.246 crypto/ipsec_mb: not in enabled drivers build config 00:02:13.246 crypto/mlx5: not in enabled drivers build config 00:02:13.246 crypto/mvsam: not in enabled drivers build config 00:02:13.246 crypto/nitrox: not in enabled drivers build config 00:02:13.246 crypto/null: not in enabled drivers build config 00:02:13.246 crypto/octeontx: not in enabled drivers build config 00:02:13.246 crypto/openssl: not in enabled drivers build config 00:02:13.246 crypto/scheduler: not in enabled drivers build config 00:02:13.246 crypto/uadk: not in enabled drivers build config 00:02:13.246 crypto/virtio: not in enabled drivers build config 00:02:13.246 compress/isal: not in enabled drivers build config 00:02:13.246 compress/mlx5: not in enabled drivers build config 00:02:13.246 compress/octeontx: not in enabled drivers build config 00:02:13.246 compress/zlib: not in enabled drivers build config 00:02:13.246 regex/*: missing internal dependency, "regexdev" 00:02:13.246 ml/*: missing internal dependency, "mldev" 00:02:13.246 vdpa/ifc: not in enabled drivers build config 00:02:13.246 vdpa/mlx5: not in enabled drivers build config 00:02:13.246 vdpa/nfp: not in enabled drivers build config 00:02:13.246 vdpa/sfc: not in enabled drivers build config 00:02:13.246 event/*: missing internal dependency, "eventdev" 00:02:13.246 baseband/*: missing internal dependency, "bbdev" 00:02:13.246 gpu/*: missing internal dependency, "gpudev" 00:02:13.246 00:02:13.246 00:02:13.246 Build targets in project: 84 00:02:13.246 00:02:13.246 DPDK 23.11.0 00:02:13.246 00:02:13.246 User defined options 00:02:13.246 buildtype : debug 00:02:13.246 default_library : shared 00:02:13.246 libdir : lib 00:02:13.246 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:13.246 b_sanitize : address 00:02:13.246 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon 
-Wno-stringop-overread -Wno-array-bounds 00:02:13.246 c_link_args : 00:02:13.246 cpu_instruction_set: native 00:02:13.246 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:13.246 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:13.246 enable_docs : false 00:02:13.246 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:13.246 enable_kmods : false 00:02:13.246 tests : false 00:02:13.246 00:02:13.246 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:13.505 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:13.505 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:13.505 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:13.505 [3/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:13.505 [4/264] Linking static target lib/librte_kvargs.a 00:02:13.505 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:13.505 [6/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:13.505 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:13.505 [8/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:13.505 [9/264] Linking static target lib/librte_log.a 00:02:13.505 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:13.763 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:13.763 [12/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:13.763 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:14.021 [14/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.021 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:14.021 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:14.022 [17/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:14.022 [18/264] Linking static target lib/librte_telemetry.a 00:02:14.022 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:14.022 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:14.022 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:14.022 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:14.280 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:14.280 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:14.280 [25/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.538 [26/264] Linking target lib/librte_log.so.24.0 00:02:14.538 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:14.538 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:14.538 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:14.538 
[30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:14.538 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:14.538 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:14.538 [33/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:14.538 [34/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.538 [35/264] Linking target lib/librte_kvargs.so.24.0 00:02:14.796 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:14.796 [37/264] Linking target lib/librte_telemetry.so.24.0 00:02:14.796 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:14.796 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:14.796 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:14.796 [41/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:14.796 [42/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:14.796 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:14.796 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:14.796 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:14.796 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:15.054 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:15.054 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:15.054 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:15.313 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:15.313 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:15.313 [52/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:15.313 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:15.313 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:15.313 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:15.313 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:15.313 [57/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:15.313 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:15.313 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:15.313 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:15.313 [61/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:15.313 [62/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:15.572 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:15.572 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:15.572 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:15.572 [66/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:15.572 [67/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:15.572 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:15.572 [69/264] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:15.572 [70/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:15.830 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:15.830 [72/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:15.830 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:15.830 [74/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:15.830 [75/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:15.830 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:15.830 [77/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:15.830 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:15.830 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:15.830 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:15.830 [81/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:16.089 [82/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:16.089 [83/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:16.089 [84/264] Linking static target lib/librte_ring.a 00:02:16.089 [85/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:16.089 [86/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:16.089 [87/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:16.348 [88/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:16.348 [89/264] Linking static target lib/librte_rcu.a 00:02:16.348 [90/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:16.348 [91/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:16.348 [92/264] Linking static target lib/librte_eal.a 00:02:16.348 [93/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.348 [94/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:16.606 [95/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:16.606 [96/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:16.606 [97/264] Linking static target lib/librte_mempool.a 00:02:16.606 [98/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:16.606 [99/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:16.606 [100/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:16.606 [101/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:16.606 [102/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.606 [103/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:16.864 [104/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:16.864 [105/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:16.864 [106/264] Linking static target lib/librte_net.a 00:02:16.864 [107/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:16.864 [108/264] Linking static target lib/librte_meter.a 00:02:16.864 [109/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:16.864 [110/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:16.864 [111/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:16.864 
[112/264] Linking static target lib/librte_mbuf.a 00:02:17.122 [113/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:17.122 [114/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.122 [115/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:17.122 [116/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.380 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:17.380 [118/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.380 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:17.638 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:17.638 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:17.638 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:17.896 [123/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.896 [124/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:17.896 [125/264] Linking static target lib/librte_pci.a 00:02:17.896 [126/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:17.896 [127/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:17.896 [128/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:17.896 [129/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:17.896 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:17.896 [131/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:17.897 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:17.897 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:18.155 [134/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:18.155 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:18.155 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:18.155 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:18.155 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:18.155 [139/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.155 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:18.155 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:18.155 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:18.155 [143/264] Linking static target lib/librte_cmdline.a 00:02:18.155 [144/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:18.413 [145/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:18.413 [146/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:18.413 [147/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:18.413 [148/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:18.671 [149/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:18.671 [150/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:18.671 [151/264] 
Linking static target lib/librte_timer.a 00:02:18.671 [152/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:18.671 [153/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:18.671 [154/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:18.671 [155/264] Linking static target lib/librte_ethdev.a 00:02:18.671 [156/264] Linking static target lib/librte_compressdev.a 00:02:18.671 [157/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:18.930 [158/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:18.930 [159/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:18.930 [160/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:18.930 [161/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.930 [162/264] Linking static target lib/librte_hash.a 00:02:18.930 [163/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:18.930 [164/264] Linking static target lib/librte_dmadev.a 00:02:18.930 [165/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:19.188 [166/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:19.188 [167/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:19.188 [168/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.188 [169/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.188 [170/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:19.188 [171/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:19.188 [172/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.446 [173/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:19.446 [174/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:19.446 [175/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:19.446 [176/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:19.446 [177/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:19.446 [178/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:19.446 [179/264] Linking static target lib/librte_cryptodev.a 00:02:19.446 [180/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.705 [181/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:19.705 [182/264] Linking static target lib/librte_power.a 00:02:19.705 [183/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:19.705 [184/264] Linking static target lib/librte_reorder.a 00:02:19.705 [185/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:19.705 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:19.705 [187/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:19.963 [188/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.963 [189/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:19.963 [190/264] Linking static target lib/librte_security.a 00:02:19.963 [191/264] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:20.221 [192/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.222 [193/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:20.222 [194/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:20.222 [195/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.479 [196/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:20.479 [197/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:20.479 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:20.479 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:20.479 [200/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:20.480 [201/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:20.480 [202/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.755 [203/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:20.755 [204/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:20.755 [205/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:20.755 [206/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:20.756 [207/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:20.756 [208/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:20.756 [209/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.756 [210/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:21.014 [211/264] Linking static target drivers/librte_bus_vdev.a 00:02:21.014 [212/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:21.014 [213/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:21.014 [214/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:21.014 [215/264] Linking static target drivers/librte_bus_pci.a 00:02:21.014 [216/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:21.014 [217/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:21.014 [218/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.014 [219/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:21.014 [220/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:21.014 [221/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:21.014 [222/264] Linking static target drivers/librte_mempool_ring.a 00:02:21.272 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.841 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:22.776 [225/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.776 [226/264] Linking target lib/librte_eal.so.24.0 00:02:22.776 [227/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:23.055 [228/264] Linking target 
lib/librte_ring.so.24.0 00:02:23.055 [229/264] Linking target lib/librte_pci.so.24.0 00:02:23.055 [230/264] Linking target lib/librte_dmadev.so.24.0 00:02:23.055 [231/264] Linking target lib/librte_timer.so.24.0 00:02:23.055 [232/264] Linking target lib/librte_meter.so.24.0 00:02:23.055 [233/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:23.055 [234/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:23.055 [235/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:23.055 [236/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:23.055 [237/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:23.055 [238/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:23.055 [239/264] Linking target lib/librte_mempool.so.24.0 00:02:23.055 [240/264] Linking target lib/librte_rcu.so.24.0 00:02:23.055 [241/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:23.359 [242/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:23.359 [243/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:23.359 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:23.359 [245/264] Linking target lib/librte_mbuf.so.24.0 00:02:23.359 [246/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.359 [247/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:23.359 [248/264] Linking target lib/librte_reorder.so.24.0 00:02:23.359 [249/264] Linking target lib/librte_cryptodev.so.24.0 00:02:23.359 [250/264] Linking target lib/librte_compressdev.so.24.0 00:02:23.359 [251/264] Linking target lib/librte_net.so.24.0 00:02:23.359 [252/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:23.617 [253/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:23.617 [254/264] Linking target lib/librte_cmdline.so.24.0 00:02:23.617 [255/264] Linking target lib/librte_hash.so.24.0 00:02:23.617 [256/264] Linking target lib/librte_security.so.24.0 00:02:23.618 [257/264] Linking target lib/librte_ethdev.so.24.0 00:02:23.618 [258/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:23.618 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:23.618 [260/264] Linking target lib/librte_power.so.24.0 00:02:24.184 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:24.443 [262/264] Linking static target lib/librte_vhost.a 00:02:25.825 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.825 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:25.825 INFO: autodetecting backend as ninja 00:02:25.825 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:26.764 CC lib/ut_mock/mock.o 00:02:26.764 CC lib/log/log_flags.o 00:02:26.764 CC lib/log/log.o 00:02:26.764 CC lib/log/log_deprecated.o 00:02:26.764 CC lib/ut/ut.o 00:02:26.764 LIB libspdk_ut_mock.a 00:02:26.764 LIB libspdk_log.a 00:02:26.764 SO libspdk_ut_mock.so.5.0 00:02:26.764 LIB libspdk_ut.a 00:02:26.764 SO libspdk_log.so.6.1 00:02:26.764 SO libspdk_ut.so.1.0 00:02:26.764 SYMLINK libspdk_ut_mock.so 00:02:27.022 SYMLINK libspdk_log.so 
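Annotation: the CC / LIB / SO / SYMLINK tags above are SPDK's quiet-make output, one tag per build step. For a single library the sequence expands to roughly the following; the library names and version suffixes mirror the log, while the exact compiler flags and soname value are assumptions for illustration:

    cc -fPIC -c lib/log/log.c -o lib/log/log.o               # CC: compile one object
    ar crs build/lib/libspdk_log.a lib/log/log.o             # LIB: static archive
    cc -shared -Wl,-soname,libspdk_log.so.6 \
       -o build/lib/libspdk_log.so.6.1 lib/log/log.o         # SO: versioned shared lib
    ln -sf libspdk_log.so.6.1 build/lib/libspdk_log.so       # SYMLINK: unversioned name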
00:02:27.022 SYMLINK libspdk_ut.so 00:02:27.022 CC lib/dma/dma.o 00:02:27.022 CC lib/util/base64.o 00:02:27.022 CC lib/ioat/ioat.o 00:02:27.022 CC lib/util/bit_array.o 00:02:27.022 CC lib/util/crc16.o 00:02:27.022 CC lib/util/cpuset.o 00:02:27.022 CC lib/util/crc32.o 00:02:27.022 CC lib/util/crc32c.o 00:02:27.022 CXX lib/trace_parser/trace.o 00:02:27.022 CC lib/vfio_user/host/vfio_user_pci.o 00:02:27.022 CC lib/vfio_user/host/vfio_user.o 00:02:27.022 CC lib/util/crc32_ieee.o 00:02:27.022 CC lib/util/crc64.o 00:02:27.022 CC lib/util/dif.o 00:02:27.281 LIB libspdk_dma.a 00:02:27.281 CC lib/util/fd.o 00:02:27.281 SO libspdk_dma.so.3.0 00:02:27.281 CC lib/util/file.o 00:02:27.281 CC lib/util/hexlify.o 00:02:27.281 CC lib/util/iov.o 00:02:27.281 SYMLINK libspdk_dma.so 00:02:27.281 CC lib/util/math.o 00:02:27.281 LIB libspdk_ioat.a 00:02:27.281 CC lib/util/pipe.o 00:02:27.281 SO libspdk_ioat.so.6.0 00:02:27.281 CC lib/util/strerror_tls.o 00:02:27.281 LIB libspdk_vfio_user.a 00:02:27.281 CC lib/util/string.o 00:02:27.281 SO libspdk_vfio_user.so.4.0 00:02:27.281 CC lib/util/uuid.o 00:02:27.281 SYMLINK libspdk_ioat.so 00:02:27.281 CC lib/util/fd_group.o 00:02:27.281 CC lib/util/xor.o 00:02:27.281 CC lib/util/zipf.o 00:02:27.281 SYMLINK libspdk_vfio_user.so 00:02:27.847 LIB libspdk_util.a 00:02:27.847 SO libspdk_util.so.8.0 00:02:27.847 LIB libspdk_trace_parser.a 00:02:27.847 SYMLINK libspdk_util.so 00:02:27.847 SO libspdk_trace_parser.so.4.0 00:02:28.105 CC lib/rdma/common.o 00:02:28.105 CC lib/json/json_parse.o 00:02:28.105 CC lib/json/json_util.o 00:02:28.105 CC lib/idxd/idxd.o 00:02:28.105 CC lib/idxd/idxd_user.o 00:02:28.105 CC lib/rdma/rdma_verbs.o 00:02:28.105 SYMLINK libspdk_trace_parser.so 00:02:28.105 CC lib/conf/conf.o 00:02:28.105 CC lib/vmd/vmd.o 00:02:28.105 CC lib/env_dpdk/env.o 00:02:28.105 CC lib/json/json_write.o 00:02:28.105 CC lib/env_dpdk/memory.o 00:02:28.105 CC lib/env_dpdk/pci.o 00:02:28.105 LIB libspdk_conf.a 00:02:28.105 CC lib/idxd/idxd_kernel.o 00:02:28.105 CC lib/env_dpdk/init.o 00:02:28.362 SO libspdk_conf.so.5.0 00:02:28.362 LIB libspdk_rdma.a 00:02:28.362 LIB libspdk_json.a 00:02:28.362 SO libspdk_rdma.so.5.0 00:02:28.362 SYMLINK libspdk_conf.so 00:02:28.362 SO libspdk_json.so.5.1 00:02:28.362 CC lib/vmd/led.o 00:02:28.362 SYMLINK libspdk_rdma.so 00:02:28.363 CC lib/env_dpdk/threads.o 00:02:28.363 SYMLINK libspdk_json.so 00:02:28.363 CC lib/env_dpdk/pci_ioat.o 00:02:28.363 CC lib/jsonrpc/jsonrpc_server.o 00:02:28.363 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:28.363 CC lib/env_dpdk/pci_virtio.o 00:02:28.363 CC lib/jsonrpc/jsonrpc_client.o 00:02:28.621 CC lib/env_dpdk/pci_vmd.o 00:02:28.621 CC lib/env_dpdk/pci_idxd.o 00:02:28.621 CC lib/env_dpdk/pci_event.o 00:02:28.621 LIB libspdk_idxd.a 00:02:28.621 SO libspdk_idxd.so.11.0 00:02:28.621 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:28.621 CC lib/env_dpdk/sigbus_handler.o 00:02:28.621 CC lib/env_dpdk/pci_dpdk.o 00:02:28.621 LIB libspdk_vmd.a 00:02:28.621 SYMLINK libspdk_idxd.so 00:02:28.621 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:28.621 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:28.621 SO libspdk_vmd.so.5.0 00:02:28.621 SYMLINK libspdk_vmd.so 00:02:28.879 LIB libspdk_jsonrpc.a 00:02:28.879 SO libspdk_jsonrpc.so.5.1 00:02:28.879 SYMLINK libspdk_jsonrpc.so 00:02:29.138 CC lib/rpc/rpc.o 00:02:29.138 LIB libspdk_rpc.a 00:02:29.396 SO libspdk_rpc.so.5.0 00:02:29.396 SYMLINK libspdk_rpc.so 00:02:29.396 LIB libspdk_env_dpdk.a 00:02:29.396 CC lib/notify/notify.o 00:02:29.396 SO libspdk_env_dpdk.so.13.0 00:02:29.396 CC 
lib/notify/notify_rpc.o 00:02:29.396 CC lib/sock/sock.o 00:02:29.396 CC lib/sock/sock_rpc.o 00:02:29.396 CC lib/trace/trace.o 00:02:29.396 CC lib/trace/trace_flags.o 00:02:29.396 CC lib/trace/trace_rpc.o 00:02:29.654 SYMLINK libspdk_env_dpdk.so 00:02:29.654 LIB libspdk_notify.a 00:02:29.654 SO libspdk_notify.so.5.0 00:02:29.654 SYMLINK libspdk_notify.so 00:02:29.654 LIB libspdk_trace.a 00:02:29.654 SO libspdk_trace.so.9.0 00:02:29.654 SYMLINK libspdk_trace.so 00:02:29.912 LIB libspdk_sock.a 00:02:29.912 SO libspdk_sock.so.8.0 00:02:29.912 SYMLINK libspdk_sock.so 00:02:29.912 CC lib/thread/iobuf.o 00:02:29.912 CC lib/thread/thread.o 00:02:30.169 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:30.169 CC lib/nvme/nvme_ctrlr.o 00:02:30.169 CC lib/nvme/nvme_fabric.o 00:02:30.169 CC lib/nvme/nvme_ns_cmd.o 00:02:30.169 CC lib/nvme/nvme_pcie_common.o 00:02:30.169 CC lib/nvme/nvme_qpair.o 00:02:30.169 CC lib/nvme/nvme_ns.o 00:02:30.170 CC lib/nvme/nvme_pcie.o 00:02:30.170 CC lib/nvme/nvme.o 00:02:30.736 CC lib/nvme/nvme_quirks.o 00:02:30.736 CC lib/nvme/nvme_transport.o 00:02:30.736 CC lib/nvme/nvme_discovery.o 00:02:30.736 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:30.736 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:30.736 CC lib/nvme/nvme_tcp.o 00:02:30.994 CC lib/nvme/nvme_opal.o 00:02:30.994 CC lib/nvme/nvme_io_msg.o 00:02:30.994 CC lib/nvme/nvme_poll_group.o 00:02:31.252 CC lib/nvme/nvme_zns.o 00:02:31.252 CC lib/nvme/nvme_cuse.o 00:02:31.252 CC lib/nvme/nvme_vfio_user.o 00:02:31.252 CC lib/nvme/nvme_rdma.o 00:02:31.252 LIB libspdk_thread.a 00:02:31.510 SO libspdk_thread.so.9.0 00:02:31.510 SYMLINK libspdk_thread.so 00:02:31.510 CC lib/accel/accel.o 00:02:31.510 CC lib/blob/blobstore.o 00:02:31.510 CC lib/accel/accel_rpc.o 00:02:31.510 CC lib/init/json_config.o 00:02:31.769 CC lib/virtio/virtio.o 00:02:31.769 CC lib/virtio/virtio_vhost_user.o 00:02:31.769 CC lib/virtio/virtio_vfio_user.o 00:02:31.769 CC lib/virtio/virtio_pci.o 00:02:31.769 CC lib/init/subsystem.o 00:02:32.028 CC lib/accel/accel_sw.o 00:02:32.028 CC lib/blob/request.o 00:02:32.028 CC lib/init/subsystem_rpc.o 00:02:32.028 CC lib/blob/zeroes.o 00:02:32.028 CC lib/init/rpc.o 00:02:32.028 LIB libspdk_virtio.a 00:02:32.028 CC lib/blob/blob_bs_dev.o 00:02:32.028 SO libspdk_virtio.so.6.0 00:02:32.028 SYMLINK libspdk_virtio.so 00:02:32.286 LIB libspdk_init.a 00:02:32.286 SO libspdk_init.so.4.0 00:02:32.286 SYMLINK libspdk_init.so 00:02:32.286 CC lib/event/app.o 00:02:32.286 CC lib/event/log_rpc.o 00:02:32.286 CC lib/event/reactor.o 00:02:32.286 CC lib/event/scheduler_static.o 00:02:32.286 CC lib/event/app_rpc.o 00:02:32.545 LIB libspdk_accel.a 00:02:32.545 SO libspdk_accel.so.14.0 00:02:32.545 SYMLINK libspdk_accel.so 00:02:32.804 LIB libspdk_nvme.a 00:02:32.804 LIB libspdk_event.a 00:02:32.804 CC lib/bdev/bdev.o 00:02:32.804 CC lib/bdev/bdev_rpc.o 00:02:32.804 CC lib/bdev/part.o 00:02:32.804 CC lib/bdev/scsi_nvme.o 00:02:32.804 CC lib/bdev/bdev_zone.o 00:02:32.804 SO libspdk_nvme.so.12.0 00:02:32.804 SO libspdk_event.so.12.0 00:02:32.804 SYMLINK libspdk_event.so 00:02:33.065 SYMLINK libspdk_nvme.so 00:02:34.465 LIB libspdk_blob.a 00:02:34.465 SO libspdk_blob.so.10.1 00:02:34.724 SYMLINK libspdk_blob.so 00:02:34.724 CC lib/lvol/lvol.o 00:02:34.724 CC lib/blobfs/blobfs.o 00:02:34.724 CC lib/blobfs/tree.o 00:02:34.982 LIB libspdk_bdev.a 00:02:34.983 SO libspdk_bdev.so.14.0 00:02:34.983 SYMLINK libspdk_bdev.so 00:02:35.246 CC lib/nvmf/ctrlr.o 00:02:35.246 CC lib/nvmf/ctrlr_bdev.o 00:02:35.246 CC lib/nvmf/subsystem.o 00:02:35.246 CC lib/nvmf/ctrlr_discovery.o 
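Annotation: the lib/nvme objects just compiled show which transports this build of the NVMe driver includes (PCIe, TCP, RDMA, vfio-user, plus the fabrics and discovery plumbing). Once the archive exists, its members can be listed to confirm they match the CC lines; the archive path below is an assumption based on typical SPDK output layout:

    # Each member of libspdk_nvme.a corresponds to one 'CC lib/nvme/...' line.
    ar t build/lib/libspdk_nvme.a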
00:02:35.246 CC lib/ublk/ublk.o 00:02:35.246 CC lib/nbd/nbd.o 00:02:35.246 CC lib/ftl/ftl_core.o 00:02:35.246 CC lib/scsi/dev.o 00:02:35.504 CC lib/scsi/lun.o 00:02:35.504 CC lib/nbd/nbd_rpc.o 00:02:35.504 CC lib/ftl/ftl_init.o 00:02:35.504 CC lib/scsi/port.o 00:02:35.504 LIB libspdk_nbd.a 00:02:35.504 LIB libspdk_blobfs.a 00:02:35.763 CC lib/ftl/ftl_layout.o 00:02:35.763 SO libspdk_nbd.so.6.0 00:02:35.763 SO libspdk_blobfs.so.9.0 00:02:35.763 SYMLINK libspdk_nbd.so 00:02:35.763 CC lib/ftl/ftl_debug.o 00:02:35.763 CC lib/ublk/ublk_rpc.o 00:02:35.763 SYMLINK libspdk_blobfs.so 00:02:35.763 CC lib/ftl/ftl_io.o 00:02:35.763 CC lib/scsi/scsi.o 00:02:35.763 LIB libspdk_lvol.a 00:02:35.763 SO libspdk_lvol.so.9.1 00:02:35.763 CC lib/scsi/scsi_bdev.o 00:02:35.763 SYMLINK libspdk_lvol.so 00:02:35.763 CC lib/scsi/scsi_pr.o 00:02:35.763 LIB libspdk_ublk.a 00:02:35.763 CC lib/ftl/ftl_sb.o 00:02:35.763 CC lib/ftl/ftl_l2p.o 00:02:35.763 CC lib/ftl/ftl_l2p_flat.o 00:02:35.763 SO libspdk_ublk.so.2.0 00:02:36.021 CC lib/scsi/scsi_rpc.o 00:02:36.021 SYMLINK libspdk_ublk.so 00:02:36.021 CC lib/scsi/task.o 00:02:36.021 CC lib/ftl/ftl_nv_cache.o 00:02:36.021 CC lib/ftl/ftl_band.o 00:02:36.021 CC lib/ftl/ftl_band_ops.o 00:02:36.021 CC lib/ftl/ftl_writer.o 00:02:36.021 CC lib/ftl/ftl_rq.o 00:02:36.021 CC lib/ftl/ftl_reloc.o 00:02:36.021 CC lib/ftl/ftl_l2p_cache.o 00:02:36.279 CC lib/ftl/ftl_p2l.o 00:02:36.279 CC lib/ftl/mngt/ftl_mngt.o 00:02:36.279 CC lib/nvmf/nvmf.o 00:02:36.279 LIB libspdk_scsi.a 00:02:36.279 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:36.279 SO libspdk_scsi.so.8.0 00:02:36.279 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:36.279 SYMLINK libspdk_scsi.so 00:02:36.279 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:36.538 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:36.538 CC lib/nvmf/nvmf_rpc.o 00:02:36.538 CC lib/nvmf/transport.o 00:02:36.538 CC lib/nvmf/tcp.o 00:02:36.538 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:36.538 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:36.538 CC lib/nvmf/rdma.o 00:02:36.797 CC lib/iscsi/conn.o 00:02:36.797 CC lib/iscsi/init_grp.o 00:02:36.797 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:36.797 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:36.797 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:37.055 CC lib/iscsi/iscsi.o 00:02:37.055 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:37.055 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:37.055 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:37.055 CC lib/ftl/utils/ftl_conf.o 00:02:37.055 CC lib/ftl/utils/ftl_md.o 00:02:37.055 CC lib/ftl/utils/ftl_mempool.o 00:02:37.313 CC lib/ftl/utils/ftl_bitmap.o 00:02:37.313 CC lib/ftl/utils/ftl_property.o 00:02:37.313 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:37.313 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:37.313 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:37.313 CC lib/vhost/vhost.o 00:02:37.313 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:37.572 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:37.572 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:37.572 CC lib/vhost/vhost_rpc.o 00:02:37.572 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:37.572 CC lib/iscsi/md5.o 00:02:37.572 CC lib/iscsi/param.o 00:02:37.572 CC lib/iscsi/portal_grp.o 00:02:37.572 CC lib/iscsi/tgt_node.o 00:02:37.572 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:37.830 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:37.830 CC lib/iscsi/iscsi_subsystem.o 00:02:37.830 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:37.830 CC lib/vhost/vhost_scsi.o 00:02:37.830 CC lib/iscsi/iscsi_rpc.o 00:02:38.089 CC lib/iscsi/task.o 00:02:38.089 CC lib/ftl/base/ftl_base_dev.o 00:02:38.089 CC lib/vhost/vhost_blk.o 00:02:38.089 CC 
lib/vhost/rte_vhost_user.o 00:02:38.089 CC lib/ftl/base/ftl_base_bdev.o 00:02:38.089 CC lib/ftl/ftl_trace.o 00:02:38.347 LIB libspdk_iscsi.a 00:02:38.347 LIB libspdk_ftl.a 00:02:38.347 LIB libspdk_nvmf.a 00:02:38.347 SO libspdk_iscsi.so.7.0 00:02:38.347 SO libspdk_ftl.so.8.0 00:02:38.347 SO libspdk_nvmf.so.17.0 00:02:38.347 SYMLINK libspdk_iscsi.so 00:02:38.605 SYMLINK libspdk_nvmf.so 00:02:38.605 SYMLINK libspdk_ftl.so 00:02:38.864 LIB libspdk_vhost.a 00:02:38.864 SO libspdk_vhost.so.7.1 00:02:38.864 SYMLINK libspdk_vhost.so 00:02:39.122 CC module/env_dpdk/env_dpdk_rpc.o 00:02:39.122 CC module/accel/dsa/accel_dsa.o 00:02:39.122 CC module/accel/error/accel_error.o 00:02:39.122 CC module/blob/bdev/blob_bdev.o 00:02:39.122 CC module/accel/iaa/accel_iaa.o 00:02:39.122 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:39.122 CC module/accel/ioat/accel_ioat.o 00:02:39.122 CC module/scheduler/gscheduler/gscheduler.o 00:02:39.122 CC module/sock/posix/posix.o 00:02:39.122 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:39.122 LIB libspdk_env_dpdk_rpc.a 00:02:39.380 SO libspdk_env_dpdk_rpc.so.5.0 00:02:39.380 LIB libspdk_scheduler_dpdk_governor.a 00:02:39.380 SYMLINK libspdk_env_dpdk_rpc.so 00:02:39.380 CC module/accel/ioat/accel_ioat_rpc.o 00:02:39.380 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:39.380 CC module/accel/error/accel_error_rpc.o 00:02:39.380 CC module/accel/iaa/accel_iaa_rpc.o 00:02:39.380 LIB libspdk_scheduler_gscheduler.a 00:02:39.380 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:39.380 CC module/accel/dsa/accel_dsa_rpc.o 00:02:39.380 LIB libspdk_scheduler_dynamic.a 00:02:39.380 SO libspdk_scheduler_gscheduler.so.3.0 00:02:39.380 LIB libspdk_blob_bdev.a 00:02:39.380 SO libspdk_scheduler_dynamic.so.3.0 00:02:39.380 SO libspdk_blob_bdev.so.10.1 00:02:39.380 SYMLINK libspdk_scheduler_gscheduler.so 00:02:39.380 SYMLINK libspdk_scheduler_dynamic.so 00:02:39.380 LIB libspdk_accel_ioat.a 00:02:39.380 SYMLINK libspdk_blob_bdev.so 00:02:39.380 SO libspdk_accel_ioat.so.5.0 00:02:39.380 LIB libspdk_accel_iaa.a 00:02:39.380 LIB libspdk_accel_error.a 00:02:39.380 SO libspdk_accel_iaa.so.2.0 00:02:39.380 LIB libspdk_accel_dsa.a 00:02:39.381 SO libspdk_accel_error.so.1.0 00:02:39.381 SYMLINK libspdk_accel_ioat.so 00:02:39.381 SO libspdk_accel_dsa.so.4.0 00:02:39.638 SYMLINK libspdk_accel_iaa.so 00:02:39.638 SYMLINK libspdk_accel_error.so 00:02:39.638 SYMLINK libspdk_accel_dsa.so 00:02:39.638 CC module/bdev/delay/vbdev_delay.o 00:02:39.638 CC module/bdev/error/vbdev_error.o 00:02:39.638 CC module/bdev/malloc/bdev_malloc.o 00:02:39.638 CC module/bdev/gpt/gpt.o 00:02:39.638 CC module/bdev/lvol/vbdev_lvol.o 00:02:39.638 CC module/blobfs/bdev/blobfs_bdev.o 00:02:39.638 CC module/bdev/null/bdev_null.o 00:02:39.638 CC module/bdev/passthru/vbdev_passthru.o 00:02:39.638 CC module/bdev/nvme/bdev_nvme.o 00:02:39.638 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:39.896 CC module/bdev/gpt/vbdev_gpt.o 00:02:39.896 CC module/bdev/error/vbdev_error_rpc.o 00:02:39.896 CC module/bdev/null/bdev_null_rpc.o 00:02:39.897 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:39.897 LIB libspdk_blobfs_bdev.a 00:02:39.897 SO libspdk_blobfs_bdev.so.5.0 00:02:39.897 LIB libspdk_sock_posix.a 00:02:39.897 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:39.897 SO libspdk_sock_posix.so.5.0 00:02:39.897 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:39.897 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:39.897 LIB libspdk_bdev_error.a 00:02:39.897 SYMLINK libspdk_blobfs_bdev.so 00:02:39.897 LIB libspdk_bdev_passthru.a 
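Annotation: in the bdev-module section above, the vbdev_ source prefix marks virtual bdev modules that stack on top of another bdev (passthru, delay, split, zone_block, lvol, ...), as opposed to modules that drive a backing device directly (malloc, aio, nvme, ...). A quick way to separate the two from a saved copy of this log (the build.log filename is hypothetical):

    # List only the stacking (vbdev_*) modules that were compiled.
    grep -o 'CC module/bdev/[a-z_0-9]*/vbdev_[a-z_0-9]*\.o' build.log | sort -u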
00:02:39.897 SO libspdk_bdev_error.so.5.0 00:02:39.897 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:39.897 SYMLINK libspdk_sock_posix.so 00:02:39.897 CC module/bdev/nvme/nvme_rpc.o 00:02:39.897 SO libspdk_bdev_passthru.so.5.0 00:02:40.155 LIB libspdk_bdev_gpt.a 00:02:40.155 LIB libspdk_bdev_null.a 00:02:40.155 SYMLINK libspdk_bdev_error.so 00:02:40.155 SO libspdk_bdev_null.so.5.0 00:02:40.155 SO libspdk_bdev_gpt.so.5.0 00:02:40.155 LIB libspdk_bdev_delay.a 00:02:40.155 SYMLINK libspdk_bdev_passthru.so 00:02:40.155 LIB libspdk_bdev_malloc.a 00:02:40.155 SO libspdk_bdev_delay.so.5.0 00:02:40.155 SYMLINK libspdk_bdev_gpt.so 00:02:40.155 SYMLINK libspdk_bdev_null.so 00:02:40.155 SO libspdk_bdev_malloc.so.5.0 00:02:40.155 CC module/bdev/nvme/bdev_mdns_client.o 00:02:40.155 CC module/bdev/raid/bdev_raid.o 00:02:40.155 CC module/bdev/split/vbdev_split.o 00:02:40.155 SYMLINK libspdk_bdev_delay.so 00:02:40.155 CC module/bdev/split/vbdev_split_rpc.o 00:02:40.155 SYMLINK libspdk_bdev_malloc.so 00:02:40.155 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:40.155 CC module/bdev/nvme/vbdev_opal.o 00:02:40.155 LIB libspdk_bdev_lvol.a 00:02:40.155 CC module/bdev/xnvme/bdev_xnvme.o 00:02:40.413 SO libspdk_bdev_lvol.so.5.0 00:02:40.413 CC module/bdev/aio/bdev_aio.o 00:02:40.413 SYMLINK libspdk_bdev_lvol.so 00:02:40.413 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:40.413 LIB libspdk_bdev_split.a 00:02:40.413 SO libspdk_bdev_split.so.5.0 00:02:40.413 CC module/bdev/ftl/bdev_ftl.o 00:02:40.413 SYMLINK libspdk_bdev_split.so 00:02:40.413 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:40.413 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:40.413 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:40.413 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:40.672 LIB libspdk_bdev_zone_block.a 00:02:40.672 CC module/bdev/aio/bdev_aio_rpc.o 00:02:40.672 SO libspdk_bdev_zone_block.so.5.0 00:02:40.672 CC module/bdev/iscsi/bdev_iscsi.o 00:02:40.672 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:40.672 CC module/bdev/raid/bdev_raid_rpc.o 00:02:40.672 LIB libspdk_bdev_xnvme.a 00:02:40.672 SYMLINK libspdk_bdev_zone_block.so 00:02:40.672 CC module/bdev/raid/bdev_raid_sb.o 00:02:40.672 SO libspdk_bdev_xnvme.so.2.0 00:02:40.672 LIB libspdk_bdev_ftl.a 00:02:40.672 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:40.672 SO libspdk_bdev_ftl.so.5.0 00:02:40.672 SYMLINK libspdk_bdev_xnvme.so 00:02:40.672 LIB libspdk_bdev_aio.a 00:02:40.672 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:40.672 SYMLINK libspdk_bdev_ftl.so 00:02:40.672 CC module/bdev/raid/raid0.o 00:02:40.672 SO libspdk_bdev_aio.so.5.0 00:02:40.931 CC module/bdev/raid/raid1.o 00:02:40.931 SYMLINK libspdk_bdev_aio.so 00:02:40.931 CC module/bdev/raid/concat.o 00:02:40.931 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:40.931 LIB libspdk_bdev_iscsi.a 00:02:40.931 SO libspdk_bdev_iscsi.so.5.0 00:02:40.931 SYMLINK libspdk_bdev_iscsi.so 00:02:40.931 LIB libspdk_bdev_raid.a 00:02:41.189 SO libspdk_bdev_raid.so.5.0 00:02:41.189 SYMLINK libspdk_bdev_raid.so 00:02:41.189 LIB libspdk_bdev_virtio.a 00:02:41.189 SO libspdk_bdev_virtio.so.5.0 00:02:41.448 SYMLINK libspdk_bdev_virtio.so 00:02:41.448 LIB libspdk_bdev_nvme.a 00:02:41.448 SO libspdk_bdev_nvme.so.6.0 00:02:41.706 SYMLINK libspdk_bdev_nvme.so 00:02:41.965 CC module/event/subsystems/scheduler/scheduler.o 00:02:41.965 CC module/event/subsystems/vmd/vmd.o 00:02:41.965 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:41.965 CC module/event/subsystems/sock/sock.o 00:02:41.965 CC module/event/subsystems/iobuf/iobuf.o 00:02:41.965 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:02:41.965 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:41.965 LIB libspdk_event_sock.a 00:02:41.965 LIB libspdk_event_vhost_blk.a 00:02:41.965 LIB libspdk_event_scheduler.a 00:02:41.965 LIB libspdk_event_vmd.a 00:02:41.965 SO libspdk_event_sock.so.4.0 00:02:41.965 SO libspdk_event_vhost_blk.so.2.0 00:02:41.965 LIB libspdk_event_iobuf.a 00:02:41.965 SO libspdk_event_scheduler.so.3.0 00:02:41.965 SO libspdk_event_vmd.so.5.0 00:02:41.965 SO libspdk_event_iobuf.so.2.0 00:02:41.965 SYMLINK libspdk_event_vhost_blk.so 00:02:41.965 SYMLINK libspdk_event_sock.so 00:02:41.965 SYMLINK libspdk_event_scheduler.so 00:02:41.965 SYMLINK libspdk_event_vmd.so 00:02:41.965 SYMLINK libspdk_event_iobuf.so 00:02:42.223 CC module/event/subsystems/accel/accel.o 00:02:42.482 LIB libspdk_event_accel.a 00:02:42.482 SO libspdk_event_accel.so.5.0 00:02:42.482 SYMLINK libspdk_event_accel.so 00:02:42.482 CC module/event/subsystems/bdev/bdev.o 00:02:42.741 LIB libspdk_event_bdev.a 00:02:42.741 SO libspdk_event_bdev.so.5.0 00:02:42.741 SYMLINK libspdk_event_bdev.so 00:02:43.000 CC module/event/subsystems/scsi/scsi.o 00:02:43.000 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:43.001 CC module/event/subsystems/nbd/nbd.o 00:02:43.001 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:43.001 CC module/event/subsystems/ublk/ublk.o 00:02:43.001 LIB libspdk_event_scsi.a 00:02:43.001 LIB libspdk_event_ublk.a 00:02:43.001 LIB libspdk_event_nbd.a 00:02:43.001 SO libspdk_event_scsi.so.5.0 00:02:43.001 SO libspdk_event_ublk.so.2.0 00:02:43.001 SO libspdk_event_nbd.so.5.0 00:02:43.001 SYMLINK libspdk_event_scsi.so 00:02:43.001 SYMLINK libspdk_event_ublk.so 00:02:43.001 SYMLINK libspdk_event_nbd.so 00:02:43.001 LIB libspdk_event_nvmf.a 00:02:43.259 SO libspdk_event_nvmf.so.5.0 00:02:43.259 SYMLINK libspdk_event_nvmf.so 00:02:43.259 CC module/event/subsystems/iscsi/iscsi.o 00:02:43.259 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:43.259 LIB libspdk_event_iscsi.a 00:02:43.259 LIB libspdk_event_vhost_scsi.a 00:02:43.259 SO libspdk_event_iscsi.so.5.0 00:02:43.259 SO libspdk_event_vhost_scsi.so.2.0 00:02:43.518 SYMLINK libspdk_event_iscsi.so 00:02:43.518 SYMLINK libspdk_event_vhost_scsi.so 00:02:43.518 SO libspdk.so.5.0 00:02:43.518 SYMLINK libspdk.so 00:02:43.518 CXX app/trace/trace.o 00:02:43.777 CC examples/nvme/hello_world/hello_world.o 00:02:43.777 CC examples/accel/perf/accel_perf.o 00:02:43.777 CC examples/ioat/perf/perf.o 00:02:43.777 CC examples/vmd/lsvmd/lsvmd.o 00:02:43.777 CC examples/sock/hello_world/hello_sock.o 00:02:43.777 CC examples/nvmf/nvmf/nvmf.o 00:02:43.777 CC examples/bdev/hello_world/hello_bdev.o 00:02:43.777 CC test/accel/dif/dif.o 00:02:43.777 CC examples/blob/hello_world/hello_blob.o 00:02:43.777 LINK lsvmd 00:02:43.777 LINK ioat_perf 00:02:43.777 LINK spdk_trace 00:02:43.777 LINK nvmf 00:02:43.777 LINK hello_world 00:02:44.036 LINK hello_sock 00:02:44.036 LINK hello_bdev 00:02:44.036 CC examples/vmd/led/led.o 00:02:44.036 LINK hello_blob 00:02:44.036 CC examples/ioat/verify/verify.o 00:02:44.036 CC app/trace_record/trace_record.o 00:02:44.036 CC examples/nvme/reconnect/reconnect.o 00:02:44.036 LINK led 00:02:44.036 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:44.036 LINK dif 00:02:44.036 LINK accel_perf 00:02:44.036 CC app/nvmf_tgt/nvmf_main.o 00:02:44.036 CC examples/blob/cli/blobcli.o 00:02:44.036 LINK verify 00:02:44.294 CC examples/bdev/bdevperf/bdevperf.o 00:02:44.294 LINK nvmf_tgt 00:02:44.294 LINK spdk_trace_record 00:02:44.294 CC 
examples/util/zipf/zipf.o 00:02:44.294 LINK reconnect 00:02:44.294 CC examples/idxd/perf/perf.o 00:02:44.294 CC test/app/bdev_svc/bdev_svc.o 00:02:44.294 CC examples/thread/thread/thread_ex.o 00:02:44.553 LINK zipf 00:02:44.553 CC app/iscsi_tgt/iscsi_tgt.o 00:02:44.553 CC app/spdk_tgt/spdk_tgt.o 00:02:44.553 LINK nvme_manage 00:02:44.553 LINK bdev_svc 00:02:44.553 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:44.553 LINK thread 00:02:44.553 CC test/app/histogram_perf/histogram_perf.o 00:02:44.553 LINK spdk_tgt 00:02:44.553 LINK blobcli 00:02:44.553 LINK iscsi_tgt 00:02:44.553 CC examples/nvme/arbitration/arbitration.o 00:02:44.553 LINK idxd_perf 00:02:44.812 CC test/app/jsoncat/jsoncat.o 00:02:44.812 LINK histogram_perf 00:02:44.812 LINK jsoncat 00:02:44.812 TEST_HEADER include/spdk/accel.h 00:02:44.812 TEST_HEADER include/spdk/accel_module.h 00:02:44.812 TEST_HEADER include/spdk/assert.h 00:02:44.812 CC app/spdk_lspci/spdk_lspci.o 00:02:44.812 TEST_HEADER include/spdk/barrier.h 00:02:44.812 TEST_HEADER include/spdk/base64.h 00:02:44.812 TEST_HEADER include/spdk/bdev.h 00:02:44.812 TEST_HEADER include/spdk/bdev_module.h 00:02:44.812 TEST_HEADER include/spdk/bdev_zone.h 00:02:44.812 TEST_HEADER include/spdk/bit_array.h 00:02:44.812 TEST_HEADER include/spdk/bit_pool.h 00:02:44.812 TEST_HEADER include/spdk/blob_bdev.h 00:02:44.812 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:44.812 CC test/bdev/bdevio/bdevio.o 00:02:44.812 TEST_HEADER include/spdk/blobfs.h 00:02:44.812 TEST_HEADER include/spdk/blob.h 00:02:44.812 TEST_HEADER include/spdk/conf.h 00:02:44.812 CC test/app/stub/stub.o 00:02:44.812 TEST_HEADER include/spdk/config.h 00:02:44.812 TEST_HEADER include/spdk/cpuset.h 00:02:44.812 TEST_HEADER include/spdk/crc16.h 00:02:44.812 TEST_HEADER include/spdk/crc32.h 00:02:44.813 TEST_HEADER include/spdk/crc64.h 00:02:44.813 TEST_HEADER include/spdk/dif.h 00:02:44.813 TEST_HEADER include/spdk/dma.h 00:02:44.813 TEST_HEADER include/spdk/endian.h 00:02:44.813 TEST_HEADER include/spdk/env_dpdk.h 00:02:44.813 CC test/blobfs/mkfs/mkfs.o 00:02:44.813 TEST_HEADER include/spdk/env.h 00:02:44.813 TEST_HEADER include/spdk/event.h 00:02:44.813 TEST_HEADER include/spdk/fd_group.h 00:02:44.813 TEST_HEADER include/spdk/fd.h 00:02:44.813 TEST_HEADER include/spdk/file.h 00:02:44.813 TEST_HEADER include/spdk/ftl.h 00:02:44.813 TEST_HEADER include/spdk/gpt_spec.h 00:02:44.813 TEST_HEADER include/spdk/hexlify.h 00:02:44.813 TEST_HEADER include/spdk/histogram_data.h 00:02:44.813 TEST_HEADER include/spdk/idxd.h 00:02:44.813 TEST_HEADER include/spdk/idxd_spec.h 00:02:44.813 TEST_HEADER include/spdk/init.h 00:02:44.813 TEST_HEADER include/spdk/ioat.h 00:02:44.813 TEST_HEADER include/spdk/ioat_spec.h 00:02:44.813 TEST_HEADER include/spdk/iscsi_spec.h 00:02:44.813 TEST_HEADER include/spdk/json.h 00:02:44.813 TEST_HEADER include/spdk/jsonrpc.h 00:02:44.813 TEST_HEADER include/spdk/likely.h 00:02:44.813 TEST_HEADER include/spdk/log.h 00:02:44.813 TEST_HEADER include/spdk/lvol.h 00:02:44.813 TEST_HEADER include/spdk/memory.h 00:02:44.813 TEST_HEADER include/spdk/mmio.h 00:02:44.813 TEST_HEADER include/spdk/nbd.h 00:02:44.813 TEST_HEADER include/spdk/notify.h 00:02:44.813 TEST_HEADER include/spdk/nvme.h 00:02:44.813 TEST_HEADER include/spdk/nvme_intel.h 00:02:44.813 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:44.813 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:44.813 TEST_HEADER include/spdk/nvme_spec.h 00:02:44.813 TEST_HEADER include/spdk/nvme_zns.h 00:02:44.813 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:44.813 
TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:44.813 TEST_HEADER include/spdk/nvmf.h 00:02:44.813 TEST_HEADER include/spdk/nvmf_spec.h 00:02:44.813 TEST_HEADER include/spdk/nvmf_transport.h 00:02:44.813 TEST_HEADER include/spdk/opal.h 00:02:44.813 TEST_HEADER include/spdk/opal_spec.h 00:02:44.813 TEST_HEADER include/spdk/pci_ids.h 00:02:44.813 TEST_HEADER include/spdk/pipe.h 00:02:44.813 TEST_HEADER include/spdk/queue.h 00:02:44.813 TEST_HEADER include/spdk/reduce.h 00:02:44.813 TEST_HEADER include/spdk/rpc.h 00:02:44.813 TEST_HEADER include/spdk/scheduler.h 00:02:44.813 CC test/dma/test_dma/test_dma.o 00:02:44.813 TEST_HEADER include/spdk/scsi.h 00:02:44.813 TEST_HEADER include/spdk/scsi_spec.h 00:02:44.813 TEST_HEADER include/spdk/sock.h 00:02:44.813 TEST_HEADER include/spdk/stdinc.h 00:02:44.813 TEST_HEADER include/spdk/string.h 00:02:44.813 TEST_HEADER include/spdk/thread.h 00:02:44.813 TEST_HEADER include/spdk/trace.h 00:02:44.813 LINK spdk_lspci 00:02:44.813 TEST_HEADER include/spdk/trace_parser.h 00:02:44.813 TEST_HEADER include/spdk/tree.h 00:02:44.813 LINK bdevperf 00:02:44.813 TEST_HEADER include/spdk/ublk.h 00:02:44.813 TEST_HEADER include/spdk/util.h 00:02:44.813 TEST_HEADER include/spdk/uuid.h 00:02:44.813 TEST_HEADER include/spdk/version.h 00:02:44.813 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:44.813 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:44.813 TEST_HEADER include/spdk/vhost.h 00:02:44.813 TEST_HEADER include/spdk/vmd.h 00:02:45.072 TEST_HEADER include/spdk/xor.h 00:02:45.072 TEST_HEADER include/spdk/zipf.h 00:02:45.072 CXX test/cpp_headers/accel.o 00:02:45.072 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:45.072 LINK nvme_fuzz 00:02:45.072 LINK arbitration 00:02:45.072 LINK mkfs 00:02:45.072 LINK stub 00:02:45.072 CXX test/cpp_headers/accel_module.o 00:02:45.072 CXX test/cpp_headers/assert.o 00:02:45.072 CC app/spdk_nvme_perf/perf.o 00:02:45.072 CC app/spdk_nvme_identify/identify.o 00:02:45.072 CXX test/cpp_headers/barrier.o 00:02:45.072 CXX test/cpp_headers/base64.o 00:02:45.072 LINK bdevio 00:02:45.072 CC examples/nvme/hotplug/hotplug.o 00:02:45.072 CXX test/cpp_headers/bdev.o 00:02:45.357 LINK test_dma 00:02:45.357 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:45.357 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:45.357 CXX test/cpp_headers/bdev_module.o 00:02:45.357 CXX test/cpp_headers/bdev_zone.o 00:02:45.357 CXX test/cpp_headers/bit_array.o 00:02:45.357 CXX test/cpp_headers/bit_pool.o 00:02:45.357 LINK hotplug 00:02:45.357 CXX test/cpp_headers/blob_bdev.o 00:02:45.357 CXX test/cpp_headers/blobfs_bdev.o 00:02:45.357 CXX test/cpp_headers/blobfs.o 00:02:45.615 CXX test/cpp_headers/blob.o 00:02:45.615 CC test/env/mem_callbacks/mem_callbacks.o 00:02:45.615 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:45.615 CC test/env/vtophys/vtophys.o 00:02:45.615 CXX test/cpp_headers/conf.o 00:02:45.615 LINK vhost_fuzz 00:02:45.616 CC test/event/event_perf/event_perf.o 00:02:45.616 LINK cmb_copy 00:02:45.616 LINK vtophys 00:02:45.616 CXX test/cpp_headers/config.o 00:02:45.616 LINK spdk_nvme_perf 00:02:45.616 CXX test/cpp_headers/cpuset.o 00:02:45.616 CC app/spdk_nvme_discover/discovery_aer.o 00:02:45.875 LINK event_perf 00:02:45.875 CXX test/cpp_headers/crc16.o 00:02:45.875 CC test/lvol/esnap/esnap.o 00:02:45.875 CC examples/nvme/abort/abort.o 00:02:45.875 CXX test/cpp_headers/crc32.o 00:02:45.875 CC test/event/reactor/reactor.o 00:02:45.875 LINK spdk_nvme_identify 00:02:45.875 LINK spdk_nvme_discover 00:02:45.875 CC test/event/reactor_perf/reactor_perf.o 00:02:45.875 
CC test/event/app_repeat/app_repeat.o 00:02:46.133 LINK reactor 00:02:46.133 CXX test/cpp_headers/crc64.o 00:02:46.133 LINK mem_callbacks 00:02:46.133 LINK reactor_perf 00:02:46.133 CC test/event/scheduler/scheduler.o 00:02:46.133 LINK app_repeat 00:02:46.133 CC app/spdk_top/spdk_top.o 00:02:46.133 CXX test/cpp_headers/dif.o 00:02:46.133 CXX test/cpp_headers/dma.o 00:02:46.133 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:46.133 LINK abort 00:02:46.133 LINK scheduler 00:02:46.133 CC test/nvme/aer/aer.o 00:02:46.133 CXX test/cpp_headers/endian.o 00:02:46.392 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:46.392 LINK env_dpdk_post_init 00:02:46.392 CC test/nvme/reset/reset.o 00:02:46.392 CXX test/cpp_headers/env_dpdk.o 00:02:46.392 CXX test/cpp_headers/env.o 00:02:46.392 CC test/nvme/sgl/sgl.o 00:02:46.392 LINK aer 00:02:46.392 LINK pmr_persistence 00:02:46.392 CXX test/cpp_headers/event.o 00:02:46.392 CC test/env/memory/memory_ut.o 00:02:46.651 CC test/nvme/e2edp/nvme_dp.o 00:02:46.651 CXX test/cpp_headers/fd_group.o 00:02:46.651 LINK reset 00:02:46.651 LINK sgl 00:02:46.651 LINK iscsi_fuzz 00:02:46.651 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:46.651 CXX test/cpp_headers/fd.o 00:02:46.651 CC test/rpc_client/rpc_client_test.o 00:02:46.651 CXX test/cpp_headers/file.o 00:02:46.651 CXX test/cpp_headers/ftl.o 00:02:46.651 CC test/thread/poller_perf/poller_perf.o 00:02:46.651 LINK interrupt_tgt 00:02:46.909 LINK nvme_dp 00:02:46.909 LINK rpc_client_test 00:02:46.909 CC app/vhost/vhost.o 00:02:46.909 CC app/spdk_dd/spdk_dd.o 00:02:46.909 CXX test/cpp_headers/gpt_spec.o 00:02:46.909 LINK poller_perf 00:02:46.909 CXX test/cpp_headers/hexlify.o 00:02:46.909 CXX test/cpp_headers/histogram_data.o 00:02:46.909 CC test/nvme/overhead/overhead.o 00:02:46.909 LINK spdk_top 00:02:46.909 CXX test/cpp_headers/idxd.o 00:02:47.167 CXX test/cpp_headers/idxd_spec.o 00:02:47.167 LINK vhost 00:02:47.167 CC test/nvme/err_injection/err_injection.o 00:02:47.168 CC test/env/pci/pci_ut.o 00:02:47.168 CXX test/cpp_headers/init.o 00:02:47.168 CXX test/cpp_headers/ioat.o 00:02:47.168 LINK memory_ut 00:02:47.168 LINK spdk_dd 00:02:47.168 LINK err_injection 00:02:47.168 CXX test/cpp_headers/ioat_spec.o 00:02:47.168 LINK overhead 00:02:47.168 CXX test/cpp_headers/iscsi_spec.o 00:02:47.168 CC app/fio/nvme/fio_plugin.o 00:02:47.168 CC test/nvme/reserve/reserve.o 00:02:47.168 CC test/nvme/startup/startup.o 00:02:47.426 CXX test/cpp_headers/json.o 00:02:47.426 CC test/nvme/simple_copy/simple_copy.o 00:02:47.426 CXX test/cpp_headers/jsonrpc.o 00:02:47.426 CC test/nvme/connect_stress/connect_stress.o 00:02:47.426 LINK pci_ut 00:02:47.426 CC test/nvme/boot_partition/boot_partition.o 00:02:47.426 LINK startup 00:02:47.426 LINK reserve 00:02:47.426 CXX test/cpp_headers/likely.o 00:02:47.426 CC test/nvme/compliance/nvme_compliance.o 00:02:47.426 LINK boot_partition 00:02:47.426 CXX test/cpp_headers/log.o 00:02:47.426 CXX test/cpp_headers/lvol.o 00:02:47.426 LINK simple_copy 00:02:47.426 LINK connect_stress 00:02:47.685 CC test/nvme/fused_ordering/fused_ordering.o 00:02:47.685 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:47.685 CXX test/cpp_headers/memory.o 00:02:47.685 CXX test/cpp_headers/mmio.o 00:02:47.685 CC test/nvme/fdp/fdp.o 00:02:47.685 LINK spdk_nvme 00:02:47.685 CXX test/cpp_headers/nbd.o 00:02:47.685 CXX test/cpp_headers/notify.o 00:02:47.685 CC test/nvme/cuse/cuse.o 00:02:47.685 LINK nvme_compliance 00:02:47.685 CXX test/cpp_headers/nvme.o 00:02:47.685 LINK doorbell_aers 00:02:47.685 CXX 
test/cpp_headers/nvme_intel.o 00:02:47.685 LINK fused_ordering 00:02:47.685 CC app/fio/bdev/fio_plugin.o 00:02:47.685 CXX test/cpp_headers/nvme_ocssd.o 00:02:47.943 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:47.943 CXX test/cpp_headers/nvme_spec.o 00:02:47.943 LINK fdp 00:02:47.943 CXX test/cpp_headers/nvme_zns.o 00:02:47.943 CXX test/cpp_headers/nvmf_cmd.o 00:02:47.943 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:47.943 CXX test/cpp_headers/nvmf.o 00:02:47.943 CXX test/cpp_headers/nvmf_spec.o 00:02:47.943 CXX test/cpp_headers/nvmf_transport.o 00:02:47.943 CXX test/cpp_headers/opal.o 00:02:47.943 CXX test/cpp_headers/opal_spec.o 00:02:47.943 CXX test/cpp_headers/pci_ids.o 00:02:48.202 CXX test/cpp_headers/pipe.o 00:02:48.202 CXX test/cpp_headers/queue.o 00:02:48.202 CXX test/cpp_headers/reduce.o 00:02:48.202 CXX test/cpp_headers/rpc.o 00:02:48.202 CXX test/cpp_headers/scheduler.o 00:02:48.202 CXX test/cpp_headers/scsi.o 00:02:48.202 LINK spdk_bdev 00:02:48.202 CXX test/cpp_headers/scsi_spec.o 00:02:48.202 CXX test/cpp_headers/sock.o 00:02:48.202 CXX test/cpp_headers/stdinc.o 00:02:48.202 CXX test/cpp_headers/string.o 00:02:48.202 CXX test/cpp_headers/thread.o 00:02:48.202 CXX test/cpp_headers/trace.o 00:02:48.202 CXX test/cpp_headers/trace_parser.o 00:02:48.202 CXX test/cpp_headers/tree.o 00:02:48.202 CXX test/cpp_headers/ublk.o 00:02:48.202 CXX test/cpp_headers/util.o 00:02:48.202 CXX test/cpp_headers/uuid.o 00:02:48.202 CXX test/cpp_headers/version.o 00:02:48.462 CXX test/cpp_headers/vfio_user_pci.o 00:02:48.462 CXX test/cpp_headers/vfio_user_spec.o 00:02:48.462 CXX test/cpp_headers/vhost.o 00:02:48.462 CXX test/cpp_headers/vmd.o 00:02:48.462 CXX test/cpp_headers/xor.o 00:02:48.462 CXX test/cpp_headers/zipf.o 00:02:48.462 LINK cuse 00:02:49.399 LINK esnap 00:02:49.966 00:02:49.966 real 0m48.204s 00:02:49.966 user 4m49.677s 00:02:49.966 sys 1m1.265s 00:02:49.966 14:03:48 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:49.966 ************************************ 00:02:49.966 END TEST make 00:02:49.966 ************************************ 00:02:49.966 14:03:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:49.966 14:03:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:49.966 14:03:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:49.966 14:03:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:49.966 14:03:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:49.966 14:03:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:49.966 14:03:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:49.966 14:03:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:49.966 14:03:48 -- scripts/common.sh@335 -- # IFS=.-: 00:02:49.966 14:03:48 -- scripts/common.sh@335 -- # read -ra ver1 00:02:49.966 14:03:48 -- scripts/common.sh@336 -- # IFS=.-: 00:02:49.966 14:03:48 -- scripts/common.sh@336 -- # read -ra ver2 00:02:49.966 14:03:48 -- scripts/common.sh@337 -- # local 'op=<' 00:02:49.966 14:03:48 -- scripts/common.sh@339 -- # ver1_l=2 00:02:49.966 14:03:48 -- scripts/common.sh@340 -- # ver2_l=1 00:02:49.966 14:03:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:49.966 14:03:48 -- scripts/common.sh@343 -- # case "$op" in 00:02:49.966 14:03:48 -- scripts/common.sh@344 -- # : 1 00:02:49.966 14:03:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:49.966 14:03:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:49.966 14:03:48 -- scripts/common.sh@364 -- # decimal 1 00:02:49.966 14:03:48 -- scripts/common.sh@352 -- # local d=1 00:02:49.966 14:03:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:49.966 14:03:48 -- scripts/common.sh@354 -- # echo 1 00:02:49.966 14:03:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:49.966 14:03:48 -- scripts/common.sh@365 -- # decimal 2 00:02:49.966 14:03:48 -- scripts/common.sh@352 -- # local d=2 00:02:49.966 14:03:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:49.966 14:03:48 -- scripts/common.sh@354 -- # echo 2 00:02:49.966 14:03:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:49.966 14:03:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:49.966 14:03:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:49.966 14:03:48 -- scripts/common.sh@367 -- # return 0 00:02:49.966 14:03:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:49.966 14:03:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:49.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:49.966 --rc genhtml_branch_coverage=1 00:02:49.966 --rc genhtml_function_coverage=1 00:02:49.966 --rc genhtml_legend=1 00:02:49.966 --rc geninfo_all_blocks=1 00:02:49.966 --rc geninfo_unexecuted_blocks=1 00:02:49.966 00:02:49.966 ' 00:02:49.966 14:03:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:49.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:49.966 --rc genhtml_branch_coverage=1 00:02:49.966 --rc genhtml_function_coverage=1 00:02:49.966 --rc genhtml_legend=1 00:02:49.966 --rc geninfo_all_blocks=1 00:02:49.966 --rc geninfo_unexecuted_blocks=1 00:02:49.966 00:02:49.966 ' 00:02:49.966 14:03:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:49.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:49.966 --rc genhtml_branch_coverage=1 00:02:49.966 --rc genhtml_function_coverage=1 00:02:49.966 --rc genhtml_legend=1 00:02:49.966 --rc geninfo_all_blocks=1 00:02:49.966 --rc geninfo_unexecuted_blocks=1 00:02:49.966 00:02:49.966 ' 00:02:49.966 14:03:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:49.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:49.966 --rc genhtml_branch_coverage=1 00:02:49.966 --rc genhtml_function_coverage=1 00:02:49.966 --rc genhtml_legend=1 00:02:49.966 --rc geninfo_all_blocks=1 00:02:49.966 --rc geninfo_unexecuted_blocks=1 00:02:49.966 00:02:49.966 ' 00:02:49.967 14:03:48 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:49.967 14:03:48 -- nvmf/common.sh@7 -- # uname -s 00:02:49.967 14:03:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:49.967 14:03:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:49.967 14:03:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:49.967 14:03:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:49.967 14:03:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:49.967 14:03:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:49.967 14:03:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:49.967 14:03:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:49.967 14:03:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:49.967 14:03:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:49.967 14:03:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e08bbce-c901-475f-81a4-7c34959d137c 00:02:49.967 
14:03:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=1e08bbce-c901-475f-81a4-7c34959d137c 00:02:49.967 14:03:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:49.967 14:03:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:49.967 14:03:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:49.967 14:03:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:49.967 14:03:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:49.967 14:03:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:49.967 14:03:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:49.967 14:03:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:49.967 14:03:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:49.967 14:03:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:49.967 14:03:48 -- paths/export.sh@5 -- # export PATH 00:02:49.967 14:03:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:49.967 14:03:48 -- nvmf/common.sh@46 -- # : 0 00:02:49.967 14:03:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:49.967 14:03:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:49.967 14:03:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:49.967 14:03:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:49.967 14:03:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:49.967 14:03:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:49.967 14:03:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:49.967 14:03:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:49.967 14:03:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:49.967 14:03:48 -- spdk/autotest.sh@32 -- # uname -s 00:02:49.967 14:03:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:49.967 14:03:48 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:49.967 14:03:48 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:49.967 14:03:48 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:49.967 14:03:48 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:49.967 14:03:48 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:49.967 14:03:48 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:49.967 14:03:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:49.967 14:03:48 -- spdk/autotest.sh@48 
-- # udevadm_pid=48154 00:02:49.967 14:03:48 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:02:49.967 14:03:48 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:49.967 14:03:48 -- spdk/autotest.sh@54 -- # echo 48179 00:02:49.967 14:03:48 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:49.967 14:03:48 -- spdk/autotest.sh@56 -- # echo 48180 00:02:49.967 14:03:48 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:49.967 14:03:48 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:02:49.967 14:03:48 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:49.967 14:03:48 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:49.967 14:03:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:49.967 14:03:48 -- common/autotest_common.sh@10 -- # set +x 00:02:49.967 14:03:48 -- spdk/autotest.sh@70 -- # create_test_list 00:02:49.967 14:03:48 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:49.967 14:03:48 -- common/autotest_common.sh@10 -- # set +x 00:02:50.228 14:03:48 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:50.228 14:03:48 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:50.228 14:03:48 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:02:50.228 14:03:48 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:50.228 14:03:48 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:02:50.228 14:03:48 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:50.228 14:03:48 -- common/autotest_common.sh@1450 -- # uname 00:02:50.228 14:03:48 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:02:50.228 14:03:48 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:50.228 14:03:48 -- common/autotest_common.sh@1470 -- # uname 00:02:50.228 14:03:48 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:02:50.228 14:03:48 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:02:50.228 14:03:48 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:50.228 lcov: LCOV version 1.15 00:02:50.228 14:03:48 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:02:56.812 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:56.812 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:56.812 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:56.812 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:56.812 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:56.812 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:18.784 14:04:13 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:18.784 14:04:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:18.784 14:04:13 -- common/autotest_common.sh@10 -- # set +x 00:03:18.784 14:04:13 -- spdk/autotest.sh@89 -- # rm -f 00:03:18.784 14:04:13 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:18.784 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:18.784 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:03:18.784 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:03:18.784 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:18.784 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:18.784 14:04:15 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:18.784 14:04:15 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:18.784 14:04:15 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:18.784 14:04:15 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:18.784 14:04:15 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:18.784 14:04:15 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:18.784 14:04:15 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:18.784 14:04:15 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:18.784 14:04:15 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:18.784 14:04:15 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:18.784 14:04:15 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:18.784 14:04:15 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:18.784 14:04:15 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:18.784 14:04:15 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:18.784 14:04:15 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:18.784 14:04:15 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:18.784 14:04:15 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:18.784 14:04:15 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:18.784 14:04:15 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:18.784 14:04:15 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:18.784 14:04:15 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:03:18.784 14:04:15 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:03:18.784 14:04:15 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:18.784 14:04:15 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:18.784 14:04:15 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:18.784 14:04:15 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:03:18.784 14:04:15 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:03:18.784 14:04:15 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:18.784 14:04:15 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:18.784 14:04:15 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:18.784 14:04:15 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:03:18.784 14:04:15 -- 
common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:03:18.784 14:04:15 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:18.784 14:04:15 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:18.784 14:04:15 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:18.784 14:04:15 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:18.784 14:04:15 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:18.784 14:04:15 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:18.784 14:04:15 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:18.784 14:04:15 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:18.784 14:04:15 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme2n2 /dev/nvme2n3 /dev/nvme3n1 00:03:18.784 14:04:15 -- spdk/autotest.sh@108 -- # grep -v p 00:03:18.784 14:04:15 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:18.784 14:04:15 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:18.784 14:04:15 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:18.784 14:04:15 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:18.784 14:04:15 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:18.784 No valid GPT data, bailing 00:03:18.784 14:04:15 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:18.784 14:04:15 -- scripts/common.sh@393 -- # pt= 00:03:18.784 14:04:15 -- scripts/common.sh@394 -- # return 1 00:03:18.784 14:04:15 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:18.784 1+0 records in 00:03:18.784 1+0 records out 00:03:18.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276166 s, 38.0 MB/s 00:03:18.784 14:04:15 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:18.784 14:04:15 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:18.784 14:04:15 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:03:18.784 14:04:15 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:18.784 14:04:15 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:18.784 No valid GPT data, bailing 00:03:18.784 14:04:15 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:18.784 14:04:15 -- scripts/common.sh@393 -- # pt= 00:03:18.784 14:04:15 -- scripts/common.sh@394 -- # return 1 00:03:18.784 14:04:15 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:18.784 1+0 records in 00:03:18.784 1+0 records out 00:03:18.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00544507 s, 193 MB/s 00:03:18.784 14:04:15 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:18.784 14:04:15 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:18.784 14:04:15 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n1 00:03:18.784 14:04:15 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:03:18.784 14:04:15 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:18.784 No valid GPT data, bailing 00:03:18.785 14:04:15 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:18.785 14:04:15 -- scripts/common.sh@393 -- # pt= 00:03:18.785 14:04:15 -- scripts/common.sh@394 -- # return 1 00:03:18.785 14:04:15 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:18.785 1+0 
records in 00:03:18.785 1+0 records out 00:03:18.785 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00520205 s, 202 MB/s 00:03:18.785 14:04:15 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:18.785 14:04:15 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:18.785 14:04:15 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n2 00:03:18.785 14:04:15 -- scripts/common.sh@380 -- # local block=/dev/nvme2n2 pt 00:03:18.785 14:04:15 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:18.785 No valid GPT data, bailing 00:03:18.785 14:04:15 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:18.785 14:04:15 -- scripts/common.sh@393 -- # pt= 00:03:18.785 14:04:15 -- scripts/common.sh@394 -- # return 1 00:03:18.785 14:04:15 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:18.785 1+0 records in 00:03:18.785 1+0 records out 00:03:18.785 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457058 s, 229 MB/s 00:03:18.785 14:04:15 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:18.785 14:04:15 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:18.785 14:04:15 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n3 00:03:18.785 14:04:15 -- scripts/common.sh@380 -- # local block=/dev/nvme2n3 pt 00:03:18.785 14:04:15 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:18.785 No valid GPT data, bailing 00:03:18.785 14:04:15 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:18.785 14:04:15 -- scripts/common.sh@393 -- # pt= 00:03:18.785 14:04:15 -- scripts/common.sh@394 -- # return 1 00:03:18.785 14:04:15 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:18.785 1+0 records in 00:03:18.785 1+0 records out 00:03:18.785 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00550115 s, 191 MB/s 00:03:18.785 14:04:15 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:18.785 14:04:15 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:18.785 14:04:15 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme3n1 00:03:18.785 14:04:15 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:03:18.785 14:04:15 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:18.785 No valid GPT data, bailing 00:03:18.785 14:04:15 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:18.785 14:04:15 -- scripts/common.sh@393 -- # pt= 00:03:18.785 14:04:15 -- scripts/common.sh@394 -- # return 1 00:03:18.785 14:04:15 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:18.785 1+0 records in 00:03:18.785 1+0 records out 00:03:18.785 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00656591 s, 160 MB/s 00:03:18.785 14:04:15 -- spdk/autotest.sh@116 -- # sync 00:03:18.785 14:04:16 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:18.785 14:04:16 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:18.785 14:04:16 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:19.359 14:04:17 -- spdk/autotest.sh@122 -- # uname -s 00:03:19.359 14:04:17 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:19.359 14:04:17 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:19.359 14:04:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:19.359 14:04:17 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:03:19.359 14:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:19.359 ************************************ 00:03:19.359 START TEST setup.sh 00:03:19.359 ************************************ 00:03:19.359 14:04:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:19.359 * Looking for test storage... 00:03:19.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:19.359 14:04:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:19.359 14:04:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:19.359 14:04:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:19.359 14:04:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:19.359 14:04:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:19.359 14:04:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:19.359 14:04:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:19.359 14:04:17 -- scripts/common.sh@335 -- # IFS=.-: 00:03:19.359 14:04:17 -- scripts/common.sh@335 -- # read -ra ver1 00:03:19.359 14:04:17 -- scripts/common.sh@336 -- # IFS=.-: 00:03:19.359 14:04:17 -- scripts/common.sh@336 -- # read -ra ver2 00:03:19.359 14:04:17 -- scripts/common.sh@337 -- # local 'op=<' 00:03:19.359 14:04:17 -- scripts/common.sh@339 -- # ver1_l=2 00:03:19.359 14:04:17 -- scripts/common.sh@340 -- # ver2_l=1 00:03:19.359 14:04:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:19.359 14:04:17 -- scripts/common.sh@343 -- # case "$op" in 00:03:19.359 14:04:17 -- scripts/common.sh@344 -- # : 1 00:03:19.359 14:04:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:19.359 14:04:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:19.359 14:04:17 -- scripts/common.sh@364 -- # decimal 1 00:03:19.359 14:04:17 -- scripts/common.sh@352 -- # local d=1 00:03:19.359 14:04:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:19.359 14:04:17 -- scripts/common.sh@354 -- # echo 1 00:03:19.359 14:04:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:19.359 14:04:17 -- scripts/common.sh@365 -- # decimal 2 00:03:19.359 14:04:17 -- scripts/common.sh@352 -- # local d=2 00:03:19.359 14:04:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:19.359 14:04:17 -- scripts/common.sh@354 -- # echo 2 00:03:19.359 14:04:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:19.359 14:04:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:19.359 14:04:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:19.359 14:04:17 -- scripts/common.sh@367 -- # return 0 00:03:19.359 14:04:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:19.359 14:04:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:19.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.359 --rc genhtml_branch_coverage=1 00:03:19.359 --rc genhtml_function_coverage=1 00:03:19.359 --rc genhtml_legend=1 00:03:19.359 --rc geninfo_all_blocks=1 00:03:19.359 --rc geninfo_unexecuted_blocks=1 00:03:19.359 00:03:19.359 ' 00:03:19.359 14:04:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:19.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.359 --rc genhtml_branch_coverage=1 00:03:19.359 --rc genhtml_function_coverage=1 00:03:19.359 --rc genhtml_legend=1 00:03:19.359 --rc geninfo_all_blocks=1 00:03:19.360 --rc geninfo_unexecuted_blocks=1 00:03:19.360 00:03:19.360 ' 00:03:19.360 14:04:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:19.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.360 --rc genhtml_branch_coverage=1 00:03:19.360 --rc genhtml_function_coverage=1 00:03:19.360 --rc genhtml_legend=1 00:03:19.360 --rc geninfo_all_blocks=1 00:03:19.360 --rc geninfo_unexecuted_blocks=1 00:03:19.360 00:03:19.360 ' 00:03:19.360 14:04:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:19.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.360 --rc genhtml_branch_coverage=1 00:03:19.360 --rc genhtml_function_coverage=1 00:03:19.360 --rc genhtml_legend=1 00:03:19.360 --rc geninfo_all_blocks=1 00:03:19.360 --rc geninfo_unexecuted_blocks=1 00:03:19.360 00:03:19.360 ' 00:03:19.360 14:04:17 -- setup/test-setup.sh@10 -- # uname -s 00:03:19.360 14:04:17 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:19.360 14:04:17 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:19.360 14:04:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:19.360 14:04:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:19.360 14:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:19.360 ************************************ 00:03:19.360 START TEST acl 00:03:19.360 ************************************ 00:03:19.360 14:04:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:19.622 * Looking for test storage... 
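A note on the lt/cmp_versions xtrace above: scripts/common.sh is deciding whether the installed lcov 1.15 predates 2.x before picking coverage flags, and the same trace repeats before every sub-test that re-sources the common helpers. A minimal bash sketch reconstructed from the traced steps (not the verbatim scripts/common.sh):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        # Split both version strings on '.', '-' and ':', as the trace's IFS=.-: does.
        local ver1 ver2 op=$2 v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        # Compare component by component, padding the shorter version with zeros.
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            if ((10#${ver1[v]:-0} > 10#${ver2[v]:-0})); then
                [[ $op == '>' || $op == '>=' ]]; return
            elif ((10#${ver1[v]:-0} < 10#${ver2[v]:-0})); then
                [[ $op == '<' || $op == '<=' ]]; return
            fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]  # all components equal
    }

    lt 1.15 2 && echo "old lcov, keep branch/function coverage flags"  # matches the trace's outcome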
00:03:19.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:19.622 14:04:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:19.622 14:04:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:19.622 14:04:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:19.622 14:04:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:19.622 14:04:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:19.622 14:04:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:19.622 14:04:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:19.622 14:04:18 -- scripts/common.sh@335 -- # IFS=.-: 00:03:19.622 14:04:18 -- scripts/common.sh@335 -- # read -ra ver1 00:03:19.622 14:04:18 -- scripts/common.sh@336 -- # IFS=.-: 00:03:19.622 14:04:18 -- scripts/common.sh@336 -- # read -ra ver2 00:03:19.622 14:04:18 -- scripts/common.sh@337 -- # local 'op=<' 00:03:19.622 14:04:18 -- scripts/common.sh@339 -- # ver1_l=2 00:03:19.622 14:04:18 -- scripts/common.sh@340 -- # ver2_l=1 00:03:19.622 14:04:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:19.622 14:04:18 -- scripts/common.sh@343 -- # case "$op" in 00:03:19.622 14:04:18 -- scripts/common.sh@344 -- # : 1 00:03:19.622 14:04:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:19.622 14:04:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:19.622 14:04:18 -- scripts/common.sh@364 -- # decimal 1 00:03:19.622 14:04:18 -- scripts/common.sh@352 -- # local d=1 00:03:19.622 14:04:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:19.622 14:04:18 -- scripts/common.sh@354 -- # echo 1 00:03:19.622 14:04:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:19.622 14:04:18 -- scripts/common.sh@365 -- # decimal 2 00:03:19.622 14:04:18 -- scripts/common.sh@352 -- # local d=2 00:03:19.622 14:04:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:19.622 14:04:18 -- scripts/common.sh@354 -- # echo 2 00:03:19.622 14:04:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:19.622 14:04:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:19.622 14:04:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:19.622 14:04:18 -- scripts/common.sh@367 -- # return 0 00:03:19.622 14:04:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:19.623 14:04:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:19.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.623 --rc genhtml_branch_coverage=1 00:03:19.623 --rc genhtml_function_coverage=1 00:03:19.623 --rc genhtml_legend=1 00:03:19.623 --rc geninfo_all_blocks=1 00:03:19.623 --rc geninfo_unexecuted_blocks=1 00:03:19.623 00:03:19.623 ' 00:03:19.623 14:04:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:19.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.623 --rc genhtml_branch_coverage=1 00:03:19.623 --rc genhtml_function_coverage=1 00:03:19.623 --rc genhtml_legend=1 00:03:19.623 --rc geninfo_all_blocks=1 00:03:19.623 --rc geninfo_unexecuted_blocks=1 00:03:19.623 00:03:19.623 ' 00:03:19.623 14:04:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:19.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.623 --rc genhtml_branch_coverage=1 00:03:19.623 --rc genhtml_function_coverage=1 00:03:19.623 --rc genhtml_legend=1 00:03:19.623 --rc geninfo_all_blocks=1 00:03:19.623 --rc geninfo_unexecuted_blocks=1 00:03:19.623 00:03:19.623 ' 00:03:19.623 14:04:18 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:19.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.623 --rc genhtml_branch_coverage=1 00:03:19.623 --rc genhtml_function_coverage=1 00:03:19.623 --rc genhtml_legend=1 00:03:19.623 --rc geninfo_all_blocks=1 00:03:19.623 --rc geninfo_unexecuted_blocks=1 00:03:19.623 00:03:19.623 ' 00:03:19.623 14:04:18 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:19.623 14:04:18 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:19.623 14:04:18 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:19.623 14:04:18 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:19.623 14:04:18 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:19.623 14:04:18 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:19.623 14:04:18 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:19.623 14:04:18 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:19.623 14:04:18 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:19.623 14:04:18 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:19.623 14:04:18 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:19.623 14:04:18 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:19.623 14:04:18 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:19.623 14:04:18 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:19.623 14:04:18 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:19.623 14:04:18 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:19.623 14:04:18 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:19.623 14:04:18 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:19.623 14:04:18 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:19.623 14:04:18 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:19.623 14:04:18 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:03:19.623 14:04:18 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:03:19.623 14:04:18 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:19.623 14:04:18 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:19.623 14:04:18 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:19.623 14:04:18 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:03:19.623 14:04:18 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:03:19.623 14:04:18 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:19.623 14:04:18 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:19.623 14:04:18 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:19.623 14:04:18 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:03:19.623 14:04:18 -- common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:03:19.623 14:04:18 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:19.623 14:04:18 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:19.623 14:04:18 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:19.623 14:04:18 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:19.623 14:04:18 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:19.623 
14:04:18 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]]
00:03:19.623 14:04:18 -- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:03:19.623 14:04:18 -- setup/acl.sh@12 -- # devs=()
00:03:19.623 14:04:18 -- setup/acl.sh@12 -- # declare -a devs
00:03:19.623 14:04:18 -- setup/acl.sh@13 -- # drivers=()
00:03:19.623 14:04:18 -- setup/acl.sh@13 -- # declare -A drivers
00:03:19.623 14:04:18 -- setup/acl.sh@51 -- # setup reset
00:03:19.623 14:04:18 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:19.623 14:04:18 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:21.010 14:04:19 -- setup/acl.sh@52 -- # collect_setup_devs
00:03:21.010 14:04:19 -- setup/acl.sh@16 -- # local dev driver
00:03:21.010 14:04:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:21.010 14:04:19 -- setup/acl.sh@15 -- # setup output status
00:03:21.010 14:04:19 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:21.010 14:04:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:21.010 Hugepages
00:03:21.010 node hugesize free / total
00:03:21.010 14:04:19 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:21.010 14:04:19 -- setup/acl.sh@19 -- # continue
00:03:21.010 14:04:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:21.010
00:03:21.010 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:21.010 14:04:19 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:21.010 14:04:19 -- setup/acl.sh@19 -- # continue
00:03:21.010 14:04:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:21.010 14:04:19 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]]
00:03:21.010 14:04:19 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]]
00:03:21.010 14:04:19 -- setup/acl.sh@20 -- # continue
00:03:21.010 14:04:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:21.010 14:04:19 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]]
00:03:21.010 14:04:19 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:21.010 14:04:19 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]]
00:03:21.010 14:04:19 -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:21.010 14:04:19 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:21.010 14:04:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:21.010 14:04:19 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]]
00:03:21.010 14:04:19 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:21.011 14:04:19 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]]
00:03:21.011 14:04:19 -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:21.011 14:04:19 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:21.011 14:04:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:21.272 14:04:19 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]]
00:03:21.272 14:04:19 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:21.272 14:04:19 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]]
00:03:21.272 14:04:19 -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:21.272 14:04:19 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:21.272 14:04:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:21.272 14:04:19 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]]
00:03:21.272 14:04:19 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:21.272 14:04:19 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]]
00:03:21.272 14:04:19 -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:21.272 14:04:19 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
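The get_zoned_devs walk traced above reduces to one sysfs probe per block device. As a standalone hedged sketch, assuming the kernel exposes /sys/block/<dev>/queue/zoned as it does on this rig:

    # 'none' means a conventional device; any other value marks a zoned device.
    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }

    for nvme in /sys/block/nvme*; do
        is_block_zoned "${nvme##*/}" && echo "${nvme##*/} is zoned"
    done

Every namespace above reports none, which is why nothing lands in the zoned map and the acl test goes on to collect all four nvme controllers.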
00:03:21.272 14:04:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.272 14:04:19 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:03:21.272 14:04:19 -- setup/acl.sh@54 -- # run_test denied denied 00:03:21.272 14:04:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.272 14:04:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:21.272 14:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:21.272 ************************************ 00:03:21.272 START TEST denied 00:03:21.272 ************************************ 00:03:21.272 14:04:19 -- common/autotest_common.sh@1114 -- # denied 00:03:21.272 14:04:19 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:21.272 14:04:19 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:21.272 14:04:19 -- setup/acl.sh@38 -- # setup output config 00:03:21.272 14:04:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.272 14:04:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:22.660 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:22.660 14:04:20 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:22.660 14:04:20 -- setup/acl.sh@28 -- # local dev driver 00:03:22.660 14:04:20 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:22.660 14:04:20 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:22.660 14:04:20 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:22.660 14:04:20 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:22.660 14:04:20 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:22.660 14:04:20 -- setup/acl.sh@41 -- # setup reset 00:03:22.660 14:04:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:22.660 14:04:20 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:29.250 00:03:29.250 real 0m7.126s 00:03:29.250 user 0m0.726s 00:03:29.250 sys 0m1.226s 00:03:29.250 14:04:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:29.250 ************************************ 00:03:29.250 END TEST denied 00:03:29.250 ************************************ 00:03:29.250 14:04:26 -- common/autotest_common.sh@10 -- # set +x 00:03:29.250 14:04:26 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:29.250 14:04:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:29.250 14:04:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:29.250 14:04:26 -- common/autotest_common.sh@10 -- # set +x 00:03:29.250 ************************************ 00:03:29.250 START TEST allowed 00:03:29.250 ************************************ 00:03:29.250 14:04:26 -- common/autotest_common.sh@1114 -- # allowed 00:03:29.250 14:04:26 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:03:29.250 14:04:26 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:03:29.250 14:04:26 -- setup/acl.sh@45 -- # setup output config 00:03:29.250 14:04:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.250 14:04:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:29.511 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:29.511 14:04:27 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:03:29.511 14:04:27 -- setup/acl.sh@28 -- # local dev driver 00:03:29.511 14:04:27 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:29.511 14:04:27 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:03:29.511 14:04:27 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:03:29.511 14:04:27 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:29.511 14:04:27 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:29.511 14:04:27 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:29.511 14:04:27 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]] 00:03:29.511 14:04:27 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:03:29.511 14:04:27 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:29.511 14:04:27 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:29.511 14:04:27 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:29.511 14:04:27 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:03:29.511 14:04:27 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:03:29.511 14:04:28 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:29.511 14:04:28 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:29.511 14:04:28 -- setup/acl.sh@48 -- # setup reset 00:03:29.511 14:04:28 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.511 14:04:28 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:30.897 00:03:30.897 real 0m2.281s 00:03:30.897 user 0m0.882s 00:03:30.897 sys 0m1.122s 00:03:30.897 14:04:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:30.897 ************************************ 00:03:30.897 END TEST allowed 00:03:30.897 ************************************ 00:03:30.897 14:04:29 -- common/autotest_common.sh@10 -- # set +x 00:03:30.897 00:03:30.897 real 0m11.292s 00:03:30.897 user 0m2.305s 00:03:30.897 sys 0m3.388s 00:03:30.897 14:04:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:30.897 ************************************ 00:03:30.897 END TEST acl 00:03:30.897 ************************************ 00:03:30.897 14:04:29 -- common/autotest_common.sh@10 -- # set +x 00:03:30.897 14:04:29 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:30.897 14:04:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:30.897 14:04:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:30.897 14:04:29 -- common/autotest_common.sh@10 -- # set +x 00:03:30.897 ************************************ 00:03:30.897 START TEST hugepages 00:03:30.897 ************************************ 00:03:30.897 14:04:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:30.897 * Looking for test storage... 
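The denied and allowed runs above steer scripts/setup.sh purely through environment variables. A hedged replay of both, using the 0000:00:06.0 address from this rig's log and assuming it is run from the SPDK repo root:

    # denied: a blocked controller must be skipped, and the test greps for the message.
    PCI_BLOCKED='0000:00:06.0' scripts/setup.sh config \
        | grep 'Skipping denied controller at 0000:00:06.0'

    # allowed: only the listed controller is rebound (nvme -> uio_pci_generic above).
    PCI_ALLOWED='0000:00:06.0' scripts/setup.sh config

    # Return all devices to their original kernel drivers, as each test does on exit.
    scripts/setup.sh reset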
00:03:30.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:30.897 14:04:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:30.897 14:04:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:30.897 14:04:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:30.897 14:04:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:30.897 14:04:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:30.897 14:04:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:30.897 14:04:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:30.897 14:04:29 -- scripts/common.sh@335 -- # IFS=.-: 00:03:30.897 14:04:29 -- scripts/common.sh@335 -- # read -ra ver1 00:03:30.897 14:04:29 -- scripts/common.sh@336 -- # IFS=.-: 00:03:30.897 14:04:29 -- scripts/common.sh@336 -- # read -ra ver2 00:03:30.897 14:04:29 -- scripts/common.sh@337 -- # local 'op=<' 00:03:30.897 14:04:29 -- scripts/common.sh@339 -- # ver1_l=2 00:03:30.897 14:04:29 -- scripts/common.sh@340 -- # ver2_l=1 00:03:30.897 14:04:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:30.897 14:04:29 -- scripts/common.sh@343 -- # case "$op" in 00:03:30.897 14:04:29 -- scripts/common.sh@344 -- # : 1 00:03:30.897 14:04:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:30.897 14:04:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:30.897 14:04:29 -- scripts/common.sh@364 -- # decimal 1 00:03:30.897 14:04:29 -- scripts/common.sh@352 -- # local d=1 00:03:30.897 14:04:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:30.897 14:04:29 -- scripts/common.sh@354 -- # echo 1 00:03:30.897 14:04:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:30.897 14:04:29 -- scripts/common.sh@365 -- # decimal 2 00:03:30.897 14:04:29 -- scripts/common.sh@352 -- # local d=2 00:03:30.897 14:04:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:30.897 14:04:29 -- scripts/common.sh@354 -- # echo 2 00:03:30.897 14:04:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:30.897 14:04:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:30.897 14:04:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:30.897 14:04:29 -- scripts/common.sh@367 -- # return 0 00:03:30.897 14:04:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:30.897 14:04:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:30.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.897 --rc genhtml_branch_coverage=1 00:03:30.897 --rc genhtml_function_coverage=1 00:03:30.897 --rc genhtml_legend=1 00:03:30.897 --rc geninfo_all_blocks=1 00:03:30.897 --rc geninfo_unexecuted_blocks=1 00:03:30.897 00:03:30.897 ' 00:03:30.897 14:04:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:30.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.897 --rc genhtml_branch_coverage=1 00:03:30.897 --rc genhtml_function_coverage=1 00:03:30.897 --rc genhtml_legend=1 00:03:30.897 --rc geninfo_all_blocks=1 00:03:30.897 --rc geninfo_unexecuted_blocks=1 00:03:30.897 00:03:30.897 ' 00:03:30.897 14:04:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:30.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.898 --rc genhtml_branch_coverage=1 00:03:30.898 --rc genhtml_function_coverage=1 00:03:30.898 --rc genhtml_legend=1 00:03:30.898 --rc geninfo_all_blocks=1 00:03:30.898 --rc geninfo_unexecuted_blocks=1 00:03:30.898 00:03:30.898 ' 00:03:30.898 14:04:29 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:30.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.898 --rc genhtml_branch_coverage=1 00:03:30.898 --rc genhtml_function_coverage=1 00:03:30.898 --rc genhtml_legend=1 00:03:30.898 --rc geninfo_all_blocks=1 00:03:30.898 --rc geninfo_unexecuted_blocks=1 00:03:30.898 00:03:30.898 ' 00:03:30.898 14:04:29 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:30.898 14:04:29 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:30.898 14:04:29 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:30.898 14:04:29 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:30.898 14:04:29 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:30.898 14:04:29 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:30.898 14:04:29 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:30.898 14:04:29 -- setup/common.sh@18 -- # local node= 00:03:30.898 14:04:29 -- setup/common.sh@19 -- # local var val 00:03:30.898 14:04:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.898 14:04:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.898 14:04:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.898 14:04:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.898 14:04:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.898 14:04:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.898 14:04:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.898 14:04:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.898 14:04:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 5796392 kB' 'MemAvailable: 7365796 kB' 'Buffers: 3704 kB' 'Cached: 1781304 kB' 'SwapCached: 0 kB' 'Active: 469032 kB' 'Inactive: 1431592 kB' 'Active(anon): 126148 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 117312 kB' 'Mapped: 53772 kB' 'Shmem: 10532 kB' 'KReclaimable: 63712 kB' 'Slab: 162552 kB' 'SReclaimable: 63712 kB' 'SUnreclaim: 98840 kB' 'KernelStack: 6624 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12410000 kB' 'Committed_AS: 319720 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55624 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:03:30.898 [xtrace condensed: setup/common.sh@32 compares each snapshot key against Hugepagesize and hits continue for every key from MemTotal through HugePages_Surp; the repeated compare/continue lines are omitted] 00:03:30.899 14:04:29 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.899 14:04:29 -- setup/common.sh@33 -- # echo 2048 00:03:30.899 14:04:29 -- setup/common.sh@33 -- # return 0 00:03:30.899 14:04:29 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:30.899 14:04:29 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:30.899 14:04:29 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:30.899 14:04:29 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:30.899 14:04:29 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:30.899 14:04:29 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:30.899 14:04:29 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:30.899 14:04:29 -- setup/hugepages.sh@207 -- # get_nodes 00:03:30.899 14:04:29 -- setup/hugepages.sh@27 -- # local node 00:03:30.899 14:04:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.899 14:04:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:30.899 14:04:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:30.899 14:04:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.899 14:04:29 -- setup/hugepages.sh@208 -- # clear_hp 00:03:30.899 14:04:29 -- setup/hugepages.sh@37 -- # local node hp 00:03:30.899 14:04:29 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:30.899 14:04:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.899 14:04:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:30.899 14:04:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.899 14:04:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:30.899 14:04:29 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:30.899 14:04:29 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:30.899 14:04:29 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:30.899 14:04:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:30.899 14:04:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:30.899 14:04:29 -- common/autotest_common.sh@10 -- # set +x 00:03:30.899 ************************************ 00:03:30.899 START TEST default_setup 00:03:30.899 ************************************ 00:03:30.899 14:04:29 -- common/autotest_common.sh@1114 -- # default_setup 00:03:30.899 14:04:29 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:30.899 14:04:29 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:30.899 14:04:29 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:30.899 14:04:29 -- setup/hugepages.sh@51 -- # shift 00:03:30.899 14:04:29 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:30.899 14:04:29 -- setup/hugepages.sh@52 -- # local node_ids 00:03:30.899 14:04:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.899 14:04:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:30.899 14:04:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:30.899 14:04:29 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:30.899 14:04:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.899 14:04:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:30.899 14:04:29 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:30.899 14:04:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.899 14:04:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.899 14:04:29 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:30.899 14:04:29 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:30.899 14:04:29 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:30.899 14:04:29 -- setup/hugepages.sh@73 -- # return 0 00:03:30.899 14:04:29 -- setup/hugepages.sh@137 -- # setup output 00:03:30.899 14:04:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.899 14:04:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
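A quick sanity check on the numbers just traced, before the setup.sh output below: get_test_nr_hugepages received size=2097152 (kB) and the default hugepage size resolved to 2048 kB, so the nr_hugepages=1024 it settles on is plain division:

    # 2097152 kB requested / 2048 kB per page = 1024 hugepages (2 GiB total)
    echo $(( 2097152 / 2048 ))   # prints 1024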
00:03:32.290 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:32.290 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:32.290 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:32.290 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:03:32.290 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:03:32.290 14:04:30 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:32.290 14:04:30 -- setup/hugepages.sh@89 -- # local node 00:03:32.290 14:04:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.290 14:04:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.290 14:04:30 -- setup/hugepages.sh@92 -- # local surp 00:03:32.290 14:04:30 -- setup/hugepages.sh@93 -- # local resv 00:03:32.290 14:04:30 -- setup/hugepages.sh@94 -- # local anon 00:03:32.290 14:04:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.290 14:04:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.290 14:04:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.290 14:04:30 -- setup/common.sh@18 -- # local node= 00:03:32.290 14:04:30 -- setup/common.sh@19 -- # local var val 00:03:32.290 14:04:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.290 14:04:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.290 14:04:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.290 14:04:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.290 14:04:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.290 14:04:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.290 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.290 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.290 14:04:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7915556 kB' 'MemAvailable: 9484776 kB' 'Buffers: 3704 kB' 'Cached: 1781288 kB' 'SwapCached: 0 kB' 'Active: 471228 kB' 'Inactive: 1431616 kB' 'Active(anon): 128344 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431616 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119440 kB' 'Mapped: 53636 kB' 'Shmem: 10492 kB' 'KReclaimable: 63296 kB' 'Slab: 162392 kB' 'SReclaimable: 63296 kB' 'SUnreclaim: 99096 kB' 'KernelStack: 6608 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 327480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:03:32.290 [xtrace condensed: the same setup/common.sh@32 compare/continue scan runs key by key until AnonHugePages matches] 00:03:32.291 14:04:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.291 14:04:30 -- setup/common.sh@33 -- # echo 0 00:03:32.291 14:04:30 -- setup/common.sh@33 -- # return 0 00:03:32.291 14:04:30 -- setup/hugepages.sh@97 -- # anon=0 00:03:32.291 14:04:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.291 14:04:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.291 14:04:30 -- setup/common.sh@18 -- # local node= 00:03:32.291 14:04:30 -- setup/common.sh@19 -- # local var val 00:03:32.291 14:04:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.291 14:04:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.291 14:04:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.291 14:04:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.291 14:04:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.291 14:04:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.291 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.291 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.291 14:04:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7915816 kB' 'MemAvailable: 9485036 kB' 'Buffers: 3704 kB' 'Cached: 1781288 kB' 'SwapCached: 0 kB' 'Active: 471024 kB' 'Inactive: 1431616 kB' 'Active(anon): 128140 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431616 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119240 kB' 'Mapped: 53532 kB' 'Shmem: 10492 kB' 'KReclaimable: 63296 kB' 'Slab: 162368 kB' 'SReclaimable: 63296 kB' 'SUnreclaim: 99072 kB' 'KernelStack: 6624 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 327480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:03:32.291 [xtrace condensed: the scan repeats for HugePages_Surp, continuing past every earlier key] 00:03:32.293 14:04:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.293 14:04:30 -- setup/common.sh@33 -- # echo 0 00:03:32.293 14:04:30 -- setup/common.sh@33 -- # return 0 00:03:32.293 14:04:30 -- setup/hugepages.sh@99 -- # surp=0
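By this point verify_nr_hugepages has driven the same get_meminfo path repeatedly (AnonHugePages, HugePages_Surp, and next HugePages_Rsvd). The helper can be reassembled almost verbatim from the statements the trace prints, the mapfile over the meminfo file, the Node-prefix strip, and the IFS=': ' field compare; this sketch fills in only the control flow around them:

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern below needs extended globs

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node queries read that node's own meminfo instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node meminfo lines carry a "Node N " prefix; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    get_meminfo Hugepagesize   # prints 2048 on this VM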
00:03:32.293 14:04:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:32.293 14:04:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:32.293 14:04:30 -- setup/common.sh@18 -- # local node= 00:03:32.293 14:04:30 -- setup/common.sh@19 -- # local var val 00:03:32.293 14:04:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.293 14:04:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.293 14:04:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.293 14:04:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.293 14:04:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.293 14:04:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.293 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.293 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.293 14:04:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7915816 kB' 'MemAvailable: 9485036 kB' 'Buffers: 3704 kB' 'Cached: 1781288 kB' 'SwapCached: 0 kB' 'Active: 471004 kB' 'Inactive: 1431616 kB' 'Active(anon): 128120 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431616 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119192 kB' 'Mapped: 53532 kB' 'Shmem: 10492 kB' 'KReclaimable: 63296 kB' 'Slab: 162364 kB' 'SReclaimable: 63296 kB' 'SUnreclaim: 99068 kB' 'KernelStack: 6608 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 327480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:03:32.293 [xtrace condensed: a fourth compare/continue scan begins for HugePages_Rsvd; the captured log ends partway through it]
setup/common.sh@32 -- # continue 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # continue 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # continue 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # continue 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # continue 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # continue 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # continue 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # continue 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # continue 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # continue 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # continue 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # continue 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # continue 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # continue 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # continue 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # continue 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # continue 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # continue 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.294 14:04:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.294 14:04:30 -- setup/common.sh@33 -- # echo 0 00:03:32.294 14:04:30 -- setup/common.sh@33 -- # return 0 00:03:32.294 14:04:30 -- setup/hugepages.sh@100 -- # resv=0 00:03:32.294 nr_hugepages=1024 00:03:32.294 14:04:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:32.294 resv_hugepages=0 00:03:32.294 surplus_hugepages=0 00:03:32.294 14:04:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:32.294 14:04:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:32.294 anon_hugepages=0 00:03:32.294 14:04:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:32.294 14:04:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.294 14:04:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:32.294 14:04:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:32.294 14:04:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:32.294 14:04:30 -- setup/common.sh@18 -- # local node= 00:03:32.294 14:04:30 -- setup/common.sh@19 -- # local var val 00:03:32.294 14:04:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.294 14:04:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.294 14:04:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.294 14:04:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.294 14:04:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.294 14:04:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.294 14:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.294 14:04:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7915952 kB' 'MemAvailable: 9485172 kB' 'Buffers: 3704 kB' 'Cached: 1781288 kB' 'SwapCached: 0 kB' 'Active: 470816 kB' 'Inactive: 1431616 kB' 'Active(anon): 127932 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431616 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
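For readers following the trace: the loop above is setup/common.sh's get_meminfo, which snapshots /proc/meminfo (or a node's copy of it), strips any "Node N " prefix, and walks the fields until the requested key matches. A condensed, self-contained sketch of that lookup, assuming bash 4+ (the function name is illustrative, not the repo's):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node N " prefix strip below

    # Illustrative re-implementation of the traced lookup, not the harness's code.
    get_meminfo_value() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # A per-node query reads that node's own meminfo instead.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # "HugePages_Rsvd: 0" -> var, val
            [[ $var == "$get" ]] || continue         # the 'continue' runs seen above
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo_value HugePages_Rsvd   # prints 0 on the VM traced here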
00:03:32.294 14:04:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:32.294 14:04:30 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:32.294 14:04:30 -- setup/common.sh@18 -- # local node=
00:03:32.294 14:04:30 -- setup/common.sh@19 -- # local var val
00:03:32.294 14:04:30 -- setup/common.sh@20 -- # local mem_f mem
00:03:32.294 14:04:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.294 14:04:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:32.294 14:04:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:32.294 14:04:30 -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.294 14:04:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.294 14:04:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7915952 kB' 'MemAvailable: 9485172 kB' 'Buffers: 3704 kB' 'Cached: 1781288 kB' 'SwapCached: 0 kB' 'Active: 470816 kB' 'Inactive: 1431616 kB' 'Active(anon): 127932 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431616 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119004 kB' 'Mapped: 53532 kB' 'Shmem: 10492 kB' 'KReclaimable: 63296 kB' 'Slab: 162360 kB' 'SReclaimable: 63296 kB' 'SUnreclaim: 99064 kB' 'KernelStack: 6592 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 327480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: @31-32 'continue' past every key of the snapshot above until HugePages_Total]
00:03:32.296 14:04:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:32.296 14:04:30 -- setup/common.sh@33 -- # echo 1024
00:03:32.296 14:04:30 -- setup/common.sh@33 -- # return 0
00:03:32.296 14:04:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:32.296 14:04:30 -- setup/hugepages.sh@112 -- # get_nodes
00:03:32.296 14:04:30 -- setup/hugepages.sh@27 -- # local node
00:03:32.296 14:04:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:32.296 14:04:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:32.296 14:04:30 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:32.296 14:04:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:32.296 14:04:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:32.296 14:04:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
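get_nodes above sizes nodes_sys from sysfs, and the next lookup reads node0's own meminfo. A hedged stand-alone equivalent of that per-node accounting (the awk extraction is ours, not the harness's):

    #!/usr/bin/env bash
    shopt -s nullglob
    # Enumerate NUMA nodes and report each node's hugepage pool from the same
    # /sys/devices/system/node/nodeN/meminfo files the trace consults.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
        free=$(awk '/HugePages_Free/ {print $NF}' "$node_dir/meminfo")
        echo "node$node: HugePages_Total=$total HugePages_Free=$free"
    done
    # On this single-node VM: node0: HugePages_Total=1024 HugePages_Free=1024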
00:03:32.296 14:04:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:32.296 14:04:30 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:32.296 14:04:30 -- setup/common.sh@18 -- # local node=0
00:03:32.296 14:04:30 -- setup/common.sh@19 -- # local var val
00:03:32.296 14:04:30 -- setup/common.sh@20 -- # local mem_f mem
00:03:32.296 14:04:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.296 14:04:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:32.296 14:04:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:32.296 14:04:30 -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.296 14:04:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.296 14:04:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7916212 kB' 'MemUsed: 4320884 kB' 'SwapCached: 0 kB' 'Active: 471076 kB' 'Inactive: 1431616 kB' 'Active(anon): 128192 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431616 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 1784992 kB' 'Mapped: 53532 kB' 'AnonPages: 119264 kB' 'Shmem: 10492 kB' 'KernelStack: 6592 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63296 kB' 'Slab: 162360 kB' 'SReclaimable: 63296 kB' 'SUnreclaim: 99064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: @31-32 'continue' past every node0 key above until HugePages_Surp]
00:03:32.297 14:04:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:32.297 14:04:30 -- setup/common.sh@33 -- # echo 0
00:03:32.297 14:04:30 -- setup/common.sh@33 -- # return 0
00:03:32.297 14:04:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:32.297 14:04:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:32.297 14:04:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:32.297 14:04:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:32.297 node0=1024 expecting 1024
00:03:32.297 14:04:30 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:32.297 14:04:30 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:32.297 ************************************
00:03:32.297 END TEST default_setup
00:03:32.297 ************************************
00:03:32.297 
00:03:32.297 real	0m1.335s
00:03:32.297 user	0m0.522s
00:03:32.297 sys	0m0.670s
00:03:32.297 14:04:30 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:32.297 14:04:30 -- common/autotest_common.sh@10 -- # set +x
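default_setup passes because the counters satisfy the identity checked at hugepages.sh@107/@110: HugePages_Total equals the requested page count plus surplus plus reserved, and node0 holds all 1024 pages. The same check compressed into a few lines (a sketch, not the test's code):

    #!/usr/bin/env bash
    expected=1024
    total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd/ {print $2}' /proc/meminfo)
    # Mirrors the harness's (( total == nr_hugepages + surp + resv )) test.
    if (( total == expected + surp + resv )); then
        echo "nr_hugepages=$total verified"
    else
        echo "mismatch: total=$total surp=$surp resv=$resv" >&2
    fi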
00:03:32.297 14:04:30 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:32.297 14:04:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:32.297 14:04:30 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:32.297 14:04:30 -- common/autotest_common.sh@10 -- # set +x
00:03:32.559 ************************************
00:03:32.559 START TEST per_node_1G_alloc
00:03:32.559 ************************************
00:03:32.559 14:04:30 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:03:32.559 14:04:30 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:32.559 14:04:30 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:03:32.559 14:04:30 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:32.559 14:04:30 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:32.559 14:04:30 -- setup/hugepages.sh@51 -- # shift
00:03:32.559 14:04:30 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:32.559 14:04:30 -- setup/hugepages.sh@52 -- # local node_ids
00:03:32.559 14:04:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:32.559 14:04:30 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:32.559 14:04:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:32.559 14:04:30 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:32.559 14:04:30 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:32.559 14:04:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:32.559 14:04:30 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:32.559 14:04:30 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:32.559 14:04:30 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:32.559 14:04:30 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:32.559 14:04:30 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:32.559 14:04:30 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:32.559 14:04:30 -- setup/hugepages.sh@73 -- # return 0
00:03:32.559 14:04:30 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:32.559 14:04:30 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:03:32.559 14:04:30 -- setup/hugepages.sh@146 -- # setup output
00:03:32.559 14:04:30 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:32.559 14:04:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:32.821 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:32.821 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:32.821 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:32.821 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:32.821 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:33.086 14:04:31 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:03:33.086 14:04:31 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:33.086 14:04:31 -- setup/hugepages.sh@89 -- # local node
00:03:33.086 14:04:31 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:33.086 14:04:31 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:33.086 14:04:31 -- setup/hugepages.sh@92 -- # local surp
00:03:33.086 14:04:31 -- setup/hugepages.sh@93 -- # local resv
00:03:33.086 14:04:31 -- setup/hugepages.sh@94 -- # local anon
00:03:33.086 14:04:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
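Stepping back to the sizing at the top of this test: get_test_nr_hugepages received 1048576 kB (1 GiB) for node 0, and with the 2048 kB default hugepage size that works out to nr_hugepages = 1048576 / 2048 = 512, handed to scripts/setup.sh through NRHUGE and HUGENODE. The arithmetic as a worked line (our sketch):

    #!/usr/bin/env bash
    size_kb=1048576                                          # 1 GiB test allocation
    hp_kb=$(awk '/^Hugepagesize/ {print $2}' /proc/meminfo)  # 2048 on this VM
    echo "NRHUGE=$(( size_kb / hp_kb )) HUGENODE=0"          # -> NRHUGE=512 HUGENODE=0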
00:03:33.086 14:04:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:33.086 14:04:31 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:33.086 14:04:31 -- setup/common.sh@18 -- # local node=
00:03:33.086 14:04:31 -- setup/common.sh@19 -- # local var val
00:03:33.086 14:04:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:33.086 14:04:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.086 14:04:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.086 14:04:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.087 14:04:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.087 14:04:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.087 14:04:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8964556 kB' 'MemAvailable: 10533776 kB' 'Buffers: 3704 kB' 'Cached: 1781288 kB' 'SwapCached: 0 kB' 'Active: 471824 kB' 'Inactive: 1431616 kB' 'Active(anon): 128940 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431616 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 120012 kB' 'Mapped: 53700 kB' 'Shmem: 10492 kB' 'KReclaimable: 63296 kB' 'Slab: 162496 kB' 'SReclaimable: 63296 kB' 'SUnreclaim: 99200 kB' 'KernelStack: 6668 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 327480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: @31-32 'continue' past every key of the snapshot above until AnonHugePages]
00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.088 14:04:31 -- setup/common.sh@33 -- # echo 0
00:03:33.088 14:04:31 -- setup/common.sh@33 -- # return 0
00:03:33.088 14:04:31 -- setup/hugepages.sh@97 -- # anon=0
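The guard at hugepages.sh@96 reads /sys/kernel/mm/transparent_hugepage/enabled; because THP is not pinned to [never] on this VM, the harness records AnonHugePages (0 kB above) so transparent huge pages cannot be mistaken for pool pages. The same check sketched stand-alone (standard kernel paths, our variable names):

    #!/usr/bin/env bash
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP is 'always' or 'madvise': anonymous huge pages may exist, count them.
        anon=$(awk '/^AnonHugePages/ {print $2}' /proc/meminfo)
    fi
    echo "anon_hugepages=$anon"   # matches the trace's anon=0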
setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8964808 kB' 'MemAvailable: 10534028 kB' 'Buffers: 3704 kB' 'Cached: 1781288 kB' 'SwapCached: 0 kB' 'Active: 471084 kB' 'Inactive: 1431616 kB' 'Active(anon): 128200 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431616 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119292 kB' 'Mapped: 53584 kB' 'Shmem: 10492 kB' 'KReclaimable: 63296 kB' 'Slab: 162496 kB' 'SReclaimable: 63296 kB' 'SUnreclaim: 99200 kB' 'KernelStack: 6656 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 327480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # 
continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 14:04:31 -- setup/common.sh@33 -- # echo 0 00:03:33.089 14:04:31 -- setup/common.sh@33 -- # return 0 00:03:33.089 14:04:31 -- setup/hugepages.sh@99 -- # surp=0 00:03:33.089 14:04:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:33.089 14:04:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:33.089 14:04:31 -- setup/common.sh@18 -- # local node= 00:03:33.089 14:04:31 -- setup/common.sh@19 -- # local var val 00:03:33.089 14:04:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.089 14:04:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.089 14:04:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.089 14:04:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.089 14:04:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.089 14:04:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8965416 kB' 'MemAvailable: 10534636 kB' 'Buffers: 3704 kB' 'Cached: 1781288 kB' 'SwapCached: 0 kB' 'Active: 470912 kB' 'Inactive: 1431616 kB' 'Active(anon): 128028 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431616 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119120 kB' 'Mapped: 53584 kB' 'Shmem: 10492 kB' 'KReclaimable: 63296 kB' 'Slab: 162496 kB' 'SReclaimable: 63296 kB' 'SUnreclaim: 99200 kB' 'KernelStack: 6656 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 327480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 14:04:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 
-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.090 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.091 14:04:31 -- setup/common.sh@33 -- # echo 0 00:03:33.091 14:04:31 -- setup/common.sh@33 -- # return 0 00:03:33.091 nr_hugepages=512 00:03:33.091 resv_hugepages=0 00:03:33.091 14:04:31 -- setup/hugepages.sh@100 -- # resv=0 00:03:33.091 14:04:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:33.091 14:04:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:33.091 surplus_hugepages=0 00:03:33.091 anon_hugepages=0 00:03:33.091 14:04:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:33.091 14:04:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:33.091 14:04:31 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:33.091 14:04:31 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:33.091 14:04:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:33.091 14:04:31 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:33.091 14:04:31 -- setup/common.sh@18 -- # local node= 00:03:33.091 14:04:31 -- setup/common.sh@19 -- # local var val 00:03:33.091 14:04:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.091 14:04:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.091 14:04:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.091 14:04:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.091 14:04:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.091 14:04:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8965416 kB' 'MemAvailable: 10534636 kB' 'Buffers: 3704 kB' 'Cached: 1781288 kB' 'SwapCached: 0 kB' 'Active: 471004 kB' 'Inactive: 1431616 kB' 'Active(anon): 128120 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431616 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119212 kB' 'Mapped: 53584 kB' 'Shmem: 10492 kB' 'KReclaimable: 63296 kB' 'Slab: 162484 kB' 'SReclaimable: 63296 kB' 'SUnreclaim: 99188 kB' 'KernelStack: 6624 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 327480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 
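
The scan that just completed is the same key-lookup loop repeated in turn for AnonHugePages, HugePages_Surp and HugePages_Rsvd: the script snapshots a meminfo file into an array, then walks it with IFS=': ' until the requested key matches itself, echoes its value, and returns. Below is a minimal reconstruction of that pattern for reference; the function name get_meminfo_sketch and its exact argument handling are illustrative assumptions, not the actual setup/common.sh source.

  # Hypothetical re-creation of the lookup pattern traced above: print the
  # value of one "Key: value" entry from /proc/meminfo or a per-node meminfo.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}            # key to fetch; optional NUMA node id
      local mem_f=/proc/meminfo mem line var val _
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"           # snapshot the file, as in the trace
      shopt -s extglob
      mem=("${mem[@]#Node +([0-9]) }")    # per-node lines start with "Node N "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1                            # key not present in this file
  }

Called as get_meminfo_sketch HugePages_Rsvd, this would print 0 for the dump above — the resv=0 that the test then feeds into the accounting identity 512 == nr_hugepages + surp + resv before re-checking HugePages_Total.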
00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 
00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.091 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.091 14:04:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 
-- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.092 14:04:31 -- setup/common.sh@33 -- # echo 512 00:03:33.092 14:04:31 -- setup/common.sh@33 -- # return 0 00:03:33.092 14:04:31 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:33.092 14:04:31 -- setup/hugepages.sh@112 -- # get_nodes 00:03:33.092 14:04:31 -- setup/hugepages.sh@27 -- # local node 00:03:33.092 14:04:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.092 14:04:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:33.092 14:04:31 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:33.092 14:04:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:33.092 14:04:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:33.092 14:04:31 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:03:33.092 14:04:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:33.092 14:04:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.092 14:04:31 -- setup/common.sh@18 -- # local node=0 00:03:33.092 14:04:31 -- setup/common.sh@19 -- # local var val 00:03:33.092 14:04:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.092 14:04:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.092 14:04:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:33.092 14:04:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:33.092 14:04:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.092 14:04:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8965416 kB' 'MemUsed: 3271680 kB' 'SwapCached: 0 kB' 'Active: 470736 kB' 'Inactive: 1431616 kB' 'Active(anon): 127852 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431616 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 1784992 kB' 'Mapped: 53584 kB' 'AnonPages: 118928 kB' 'Shmem: 10492 kB' 'KernelStack: 6608 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63296 kB' 'Slab: 162484 kB' 'SReclaimable: 63296 kB' 'SUnreclaim: 99188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.092 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.093 
14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.093 14:04:31 -- setup/common.sh@32 -- # continue 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 14:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 
[xtrace condensed: the IFS=': ' read loop checks each remaining /proc/meminfo field (PageTables through HugePages_Free) against HugePages_Surp and hits `continue` on every one]
00:03:33.093 14:04:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.093 14:04:31 -- setup/common.sh@33 -- # echo 0
00:03:33.093 14:04:31 -- setup/common.sh@33 -- # return 0
00:03:33.093 14:04:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:33.093 14:04:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:33.093 14:04:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:33.093 14:04:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:33.093 node0=512 expecting 512
00:03:33.093 14:04:31 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:33.093 14:04:31 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:33.093 
00:03:33.093 real    0m0.636s
00:03:33.093 user    0m0.243s
00:03:33.093 sys     0m0.413s
00:03:33.093 ************************************
00:03:33.093 END TEST per_node_1G_alloc
00:03:33.093 ************************************
00:03:33.093 14:04:31 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:33.093 14:04:31 -- common/autotest_common.sh@10 -- # set +x
00:03:33.093 14:04:31 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:33.093 14:04:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:33.093 14:04:31 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:33.093 14:04:31 -- common/autotest_common.sh@10 -- # set +x
00:03:33.093 ************************************
00:03:33.093 START TEST even_2G_alloc
00:03:33.093 ************************************
00:03:33.093 14:04:31 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:03:33.093 14:04:31 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:33.093 14:04:31 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:33.093 14:04:31 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:33.093 14:04:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:33.093 14:04:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:33.093 14:04:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:33.093 14:04:31 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:33.093 14:04:31 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:33.093 14:04:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
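The get_test_nr_hugepages entries above, together with the per-node split the trace resumes just below, reduce to simple arithmetic: 2097152 kB requested at the 2048 kB default hugepage size visible in the meminfo snapshots gives 1024 pages, all handed to the lone NUMA node. A minimal sketch of that calculation (variable names are illustrative, not the setup/hugepages.sh source):

    #!/usr/bin/env bash
    # Sketch: convert a requested size in kB into a default-size hugepage count,
    # then assign the whole count to node 0 on a single-node machine.
    size_kb=2097152                                                     # 2 GiB in kB, as passed by even_2G_alloc
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this host
    (( size_kb >= hugepagesize_kb )) || exit 1                          # mirrors the size sanity check in the trace
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                       # 2097152 / 2048 = 1024
    no_nodes=1                                                          # this VM exposes a single NUMA node
    declare -a nodes_test
    nodes_test[no_nodes - 1]=$nr_hugepages                              # node0 gets all 1024 pages
    echo "nr_hugepages=$nr_hugepages"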
00:03:33.094 14:04:31 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:33.094 14:04:31 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:33.094 14:04:31 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:33.094 14:04:31 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:33.094 14:04:31 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:33.094 14:04:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:33.094 14:04:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:03:33.094 14:04:31 -- setup/hugepages.sh@83 -- # : 0
00:03:33.094 14:04:31 -- setup/hugepages.sh@84 -- # : 0
00:03:33.094 14:04:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:33.094 14:04:31 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:33.094 14:04:31 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:33.094 14:04:31 -- setup/hugepages.sh@153 -- # setup output
00:03:33.094 14:04:31 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:33.094 14:04:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:33.671 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:33.671 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:33.671 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:33.671 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:33.671 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:33.671 14:04:32 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:33.671 14:04:32 -- setup/hugepages.sh@89 -- # local node
00:03:33.671 14:04:32 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:33.671 14:04:32 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:33.671 14:04:32 -- setup/hugepages.sh@92 -- # local surp
00:03:33.671 14:04:32 -- setup/hugepages.sh@93 -- # local resv
00:03:33.671 14:04:32 -- setup/hugepages.sh@94 -- # local anon
00:03:33.671 14:04:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:33.671 14:04:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:33.671 14:04:32 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:33.671 14:04:32 -- setup/common.sh@18 -- # local node=
00:03:33.671 14:04:32 -- setup/common.sh@19 -- # local var val
00:03:33.671 14:04:32 -- setup/common.sh@20 -- # local mem_f mem
00:03:33.671 14:04:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.671 14:04:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.671 14:04:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.671 14:04:32 -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.671 14:04:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.671 14:04:32 -- setup/common.sh@31 -- # IFS=': '
00:03:33.671 14:04:32 -- setup/common.sh@31 -- # read -r var val _
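That get_meminfo preamble (pick /proc/meminfo or a per-node meminfo file, mapfile it, strip any "Node N " prefixes, then scan with IFS=': ' read) is what produces the meminfo snapshot and the long per-field scan that follow; since the loop is traced one field at a time, every lookup emits dozens of `continue` entries before the requested field matches. A standalone sketch reconstructed from the trace, not the setup/common.sh source itself:

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) pattern below
    # Sketch: look up one field from /proc/meminfo, or from a per-node
    # meminfo file when a NUMA node number is supplied.
    get_meminfo() {
        local get=$1 node=${2:-} var val _ mem_f line
        local -a mem
        mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo HugePages_Total     # prints 1024 on the host traced here
    get_meminfo HugePages_Free 0    # same field, read from NUMA node 0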
00:03:33.671 14:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7917712 kB' 'MemAvailable: 9486936 kB' 'Buffers: 3704 kB' 'Cached: 1781292 kB' 'SwapCached: 0 kB' 'Active: 471896 kB' 'Inactive: 1431620 kB' 'Active(anon): 129012 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119900 kB' 'Mapped: 53696 kB' 'Shmem: 10492 kB' 'KReclaimable: 63296 kB' 'Slab: 162496 kB' 'SReclaimable: 63296 kB' 'SUnreclaim: 99200 kB' 'KernelStack: 6684 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 327480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: every field from MemTotal through HardwareCorrupted fails the AnonHugePages match and hits `continue`]
00:03:33.672 14:04:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.672 14:04:32 -- setup/common.sh@33 -- # echo 0
00:03:33.672 14:04:32 -- setup/common.sh@33 -- # return 0
00:03:33.672 14:04:32 -- setup/hugepages.sh@97 -- # anon=0
00:03:33.672 14:04:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:33.672 14:04:32 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.672 14:04:32 -- setup/common.sh@18 -- # local node=
[xtrace condensed: same get_meminfo preamble as above - mem_f=/proc/meminfo, mapfile, Node-prefix strip, IFS=': ' read]
00:03:33.672 14:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7918692 kB' 'MemAvailable: 9487916 kB' 'Buffers: 3704 kB' 'Cached: 1781292 kB' 'SwapCached: 0 kB' 'Active: 471140 kB' 'Inactive: 1431620 kB' 'Active(anon): 128256 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119304 kB' 'Mapped: 53532 kB' 'Shmem: 10492 kB' 'KReclaimable: 63296 kB' 'Slab: 162560 kB' 'SReclaimable: 63296 kB' 'SUnreclaim: 99264 kB' 'KernelStack: 6592 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 327480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: every field from MemTotal through HugePages_Rsvd fails the HugePages_Surp match and hits `continue`]
00:03:33.674 14:04:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.674 14:04:32 -- setup/common.sh@33 -- # echo 0
00:03:33.674 14:04:32 -- setup/common.sh@33 -- # return 0
00:03:33.674 14:04:32 -- setup/hugepages.sh@99 -- # surp=0
00:03:33.674 14:04:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:33.674 14:04:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[xtrace condensed: same get_meminfo preamble as above]
00:03:33.674 14:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7918692 kB' 'MemAvailable: 9487916 kB' 'Buffers: 3704 kB' 'Cached: 1781292 kB' 'SwapCached: 0 kB' 'Active: 471144 kB' 'Inactive: 1431620 kB' 'Active(anon): 128260 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119340 kB' 'Mapped: 53532 kB' 'Shmem: 10492 kB' 'KReclaimable: 63292 kB' 'Slab: 162556 kB' 'SReclaimable: 63292 kB' 'SUnreclaim: 99264 kB' 'KernelStack: 6608 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 327480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: every field from MemTotal through HugePages_Free fails the HugePages_Rsvd match and hits `continue`]
00:03:33.676 14:04:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:33.676 14:04:32 -- setup/common.sh@33 -- # echo 0
00:03:33.676 14:04:32 -- setup/common.sh@33 -- # return 0
00:03:33.676 14:04:32 -- setup/hugepages.sh@100 -- # resv=0
00:03:33.676 nr_hugepages=1024
00:03:33.676 14:04:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:33.676 resv_hugepages=0
00:03:33.676 14:04:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:33.676 surplus_hugepages=0
00:03:33.676 anon_hugepages=0
00:03:33.676 14:04:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:33.676 14:04:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:33.676 14:04:32 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:33.676 14:04:32 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:33.676 14:04:32 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
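At this point the test has anon=0, surp=0 and resv=0 in hand and is about to read HugePages_Total; the two arithmetic checks above amount to verifying that the kernel reports exactly the 1024 pages requested, with no surplus or reserved pages skewing the count. A sketch of that accounting, reusing the illustrative get_meminfo helper from earlier (not the test's literal code):

    # Sketch: the consistency check behind '(( 1024 == nr_hugepages + surp + resv ))'.
    expected=1024                          # NRHUGE requested by even_2G_alloc
    anon=$(get_meminfo AnonHugePages)      # 0 kB - no transparent hugepages in use
    surp=$(get_meminfo HugePages_Surp)     # 0    - no surplus pages allocated
    resv=$(get_meminfo HugePages_Rsvd)     # 0    - no pages reserved against the pool
    nr=$(get_meminfo HugePages_Total)      # 1024
    (( expected == nr + surp + resv )) || { echo "hugepage accounting mismatch"; exit 1; }
    (( expected == nr )) && echo "nr_hugepages=$nr resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"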
[xtrace condensed: same get_meminfo preamble as above, this time for HugePages_Total]
00:03:33.676 14:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7918944 kB' 'MemAvailable: 9488168 kB' 'Buffers: 3704 kB' 'Cached: 1781292 kB' 'SwapCached: 0 kB' 'Active: 471104 kB' 'Inactive: 1431620 kB' 'Active(anon): 128220 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119300 kB' 'Mapped: 53532 kB' 'Shmem: 10492 kB' 'KReclaimable: 63292 kB' 'Slab: 162556 kB' 'SReclaimable: 63292 kB' 'SUnreclaim: 99264 kB' 'KernelStack: 6592 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 327480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: the HugePages_Total scan hits `continue` on MemTotal through FileHugePages; the capture ends mid-loop]
setup/common.sh@32 -- # continue 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.678 14:04:32 -- setup/common.sh@33 -- # echo 1024 00:03:33.678 14:04:32 -- setup/common.sh@33 -- # return 0 00:03:33.678 14:04:32 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:33.678 14:04:32 -- setup/hugepages.sh@112 -- # get_nodes 00:03:33.678 14:04:32 -- setup/hugepages.sh@27 -- # local node 00:03:33.678 14:04:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.678 14:04:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:33.678 14:04:32 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:33.678 14:04:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:33.678 14:04:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:33.678 14:04:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:33.678 14:04:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:33.678 14:04:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.678 14:04:32 -- setup/common.sh@18 -- # local node=0 00:03:33.678 14:04:32 -- setup/common.sh@19 -- # local var val 00:03:33.678 14:04:32 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.678 14:04:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.678 14:04:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:33.678 14:04:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:33.678 14:04:32 -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.678 14:04:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.678 14:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7918944 kB' 'MemUsed: 4318152 kB' 'SwapCached: 0 kB' 'Active: 470944 kB' 'Inactive: 1431620 kB' 'Active(anon): 128060 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 1784996 kB' 'Mapped: 53532 kB' 'AnonPages: 119136 kB' 'Shmem: 10492 kB' 
'KernelStack: 6560 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63292 kB' 'Slab: 162552 kB' 'SReclaimable: 63292 kB' 'SUnreclaim: 99260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.678 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.678 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # continue 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.679 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.679 14:04:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.679 
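
[Note: the two queries above are the same setup/common.sh helper walking a meminfo file row by row: it picks the per-node file when a node is given, strips the "Node N " prefix those files add, splits each row on ': ', and echoes the value of the one requested field. A minimal standalone sketch of that pattern; the function name get_meminfo_field and its interface are illustrative, not the SPDK helper itself:

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup pattern traced above (illustrative).
    shopt -s extglob                            # for the +([0-9]) pattern below
    get_meminfo_field() {                       # hypothetical helper name
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }         # per-node rows carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }  # value only, unit dropped
        done < "$mem_f"
        return 1                                # field not present
    }
    get_meminfo_field HugePages_Total           # prints 1024 for the state logged above
    get_meminfo_field HugePages_Surp 0          # per-node variant; prints 0 here
]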
00:03:33.679 14:04:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:33.679 14:04:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:33.679 node0=1024 expecting 1024
00:03:33.679 14:04:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:33.679 14:04:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:33.679 14:04:32 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:33.679 14:04:32 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:33.679 
00:03:33.679 real 0m0.636s
00:03:33.679 user 0m0.261s
00:03:33.679 sys 0m0.396s
00:03:33.679 14:04:32 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:33.679 ************************************
00:03:33.679 END TEST even_2G_alloc
00:03:33.679 ************************************
00:03:33.679 14:04:32 -- common/autotest_common.sh@10 -- # set +x
00:03:33.941 14:04:32 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:33.941 14:04:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:33.941 14:04:32 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:33.941 14:04:32 -- common/autotest_common.sh@10 -- # set +x
00:03:33.941 ************************************
00:03:33.941 START TEST odd_alloc
00:03:33.941 ************************************
00:03:33.941 14:04:32 -- common/autotest_common.sh@1114 -- # odd_alloc
00:03:33.941 14:04:32 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:33.941 14:04:32 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:33.941 14:04:32 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:33.941 14:04:32 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:33.941 14:04:32 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:33.941 14:04:32 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:33.941 14:04:32 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:33.941 14:04:32 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:33.941 14:04:32 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:33.941 14:04:32 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:33.941 14:04:32 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:33.941 14:04:32 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:33.941 14:04:32 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:33.941 14:04:32 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:33.941 14:04:32 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:33.941 14:04:32 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:03:33.941 14:04:32 -- setup/hugepages.sh@83 -- # : 0
00:03:33.941 14:04:32 -- setup/hugepages.sh@84 -- # : 0
00:03:33.941 14:04:32 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:33.941 14:04:32 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:33.941 14:04:32 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:33.941 14:04:32 -- setup/hugepages.sh@160 -- # setup output
00:03:33.941 14:04:32 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:33.941 14:04:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:34.203 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:34.468 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:34.468 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:34.468 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:34.468 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
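
[Note: the nr_hugepages=1025 above follows from the requested size. HUGEMEM=2049 (MB) is 2049 * 1024 = 2098176 kB, and with the 2048 kB Hugepagesize reported in the snapshots that is 1024.5 pages, so the odd_alloc test lands on an odd count of 1025. A back-of-envelope restatement; ceiling division is an assumption here, as the trace only shows the result:

    size_kb=2098176                 # HUGEMEM=2049 MB expressed in kB
    hugepagesize_kb=2048            # Hugepagesize from /proc/meminfo
    nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
    echo "$nr_hugepages"            # 1025
]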
00:03:34.468 14:04:32 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:34.468 14:04:32 -- setup/hugepages.sh@89 -- # local node
00:03:34.468 14:04:32 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:34.468 14:04:32 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:34.468 14:04:32 -- setup/hugepages.sh@92 -- # local surp
00:03:34.468 14:04:32 -- setup/hugepages.sh@93 -- # local resv
00:03:34.468 14:04:32 -- setup/hugepages.sh@94 -- # local anon
00:03:34.468 14:04:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:34.468 14:04:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:34.468 14:04:32 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:34.468 14:04:32 -- setup/common.sh@18 -- # local node=
00:03:34.468 14:04:32 -- setup/common.sh@19 -- # local var val
00:03:34.468 14:04:32 -- setup/common.sh@20 -- # local mem_f mem
00:03:34.468 14:04:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.468 14:04:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.468 14:04:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.468 14:04:32 -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.468 14:04:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.468 14:04:32 -- setup/common.sh@31 -- # IFS=': '
00:03:34.468 14:04:32 -- setup/common.sh@31 -- # read -r var val _
00:03:34.468 14:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7917216 kB' 'MemAvailable: 9486440 kB' 'Buffers: 3704 kB' 'Cached: 1781292 kB' 'SwapCached: 0 kB' 'Active: 471204 kB' 'Inactive: 1431620 kB' 'Active(anon): 128320 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119364 kB' 'Mapped: 53644 kB' 'Shmem: 10492 kB' 'KReclaimable: 63292 kB' 'Slab: 162580 kB' 'SReclaimable: 63292 kB' 'SUnreclaim: 99288 kB' 'KernelStack: 6608 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 327480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
00:03:34.468 [xtrace elided: setup/common.sh@32 scans every field above for AnonHugePages and hits `continue` on each non-match]
00:03:34.469 14:04:32 -- setup/common.sh@33 -- # echo 0
00:03:34.469 14:04:32 -- setup/common.sh@33 -- # return 0
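
[Note: the setup/hugepages.sh@96 test above matches the transparent-hugepage policy string against *\[\n\e\v\e\r\]*, i.e. AnonHugePages is only sampled when THP is not pinned to "never" (the kernel brackets the active choice, here "always [madvise] never"). A sketch of that gate; illustrative, not the SPDK script:

    anon=0
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then      # policy is always or madvise
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon=$anon"                       # 0 in the run above
]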
00:03:34.469 14:04:32 -- setup/hugepages.sh@97 -- # anon=0
00:03:34.469 14:04:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:34.469 14:04:32 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:34.470 14:04:32 -- setup/common.sh@18 -- # local node=
00:03:34.470 14:04:32 -- setup/common.sh@19 -- # local var val
00:03:34.470 14:04:32 -- setup/common.sh@20 -- # local mem_f mem
00:03:34.470 14:04:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.470 14:04:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.470 14:04:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.470 14:04:32 -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.470 14:04:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.470 14:04:32 -- setup/common.sh@31 -- # IFS=': '
00:03:34.470 14:04:32 -- setup/common.sh@31 -- # read -r var val _
00:03:34.470 14:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7916584 kB' 'MemAvailable: 9485808 kB' 'Buffers: 3704 kB' 'Cached: 1781292 kB' 'SwapCached: 0 kB' 'Active: 471112 kB' 'Inactive: 1431620 kB' 'Active(anon): 128228 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119332 kB' 'Mapped: 53536 kB' 'Shmem: 10492 kB' 'KReclaimable: 63292 kB' 'Slab: 162584 kB' 'SReclaimable: 63292 kB' 'SUnreclaim: 99292 kB' 'KernelStack: 6592 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 327480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
00:03:34.470 [xtrace elided: per-field scan for HugePages_Surp; every other field hits `continue`]
00:03:34.471 14:04:32 -- setup/common.sh@33 -- # echo 0
00:03:34.471 14:04:32 -- setup/common.sh@33 -- # return 0
00:03:34.471 14:04:32 -- setup/hugepages.sh@99 -- # surp=0
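
[Note: with anon and surp both 0, the next query fetches HugePages_Rsvd; these values feed the identity checked earlier at setup/hugepages.sh@110: HugePages_Total must equal the requested count plus surplus and reserved pages. A condensed restatement with the values from the snapshots above; illustrative, not the SPDK script:

    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1025 above
    rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)   # 0 above
    surp=0                                                        # HugePages_Surp above
    nr_hugepages=1025                                             # requested by odd_alloc
    (( total == nr_hugepages + surp + rsvd )) && echo OK          # holds: 1025 == 1025+0+0
]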
00:03:34.471 14:04:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:34.471 14:04:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:34.471 14:04:32 -- setup/common.sh@18 -- # local node=
00:03:34.471 14:04:32 -- setup/common.sh@19 -- # local var val
00:03:34.471 14:04:32 -- setup/common.sh@20 -- # local mem_f mem
00:03:34.471 14:04:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.471 14:04:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.471 14:04:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.471 14:04:32 -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.471 14:04:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.471 14:04:32 -- setup/common.sh@31 -- # IFS=': '
00:03:34.471 14:04:32 -- setup/common.sh@31 -- # read -r var val _
00:03:34.471 14:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7916584 kB' 'MemAvailable: 9485808 kB' 'Buffers: 3704 kB' 'Cached: 1781292 kB' 'SwapCached: 0 kB' 'Active: 471080 kB' 'Inactive: 1431620 kB' 'Active(anon): 128196 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119292 kB' 'Mapped: 53536 kB' 'Shmem: 10492 kB' 'KReclaimable: 63292 kB' 'Slab: 162572 kB' 'SReclaimable: 63292 kB' 'SUnreclaim: 99280 kB' 'KernelStack: 6576 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 327480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
00:03:34.471 [xtrace elided: per-field scan for HugePages_Rsvd begins; the captured log breaks off mid-scan here]
-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.472 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.472 14:04:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.472 14:04:32 -- setup/common.sh@33 -- # echo 0 00:03:34.472 14:04:32 -- setup/common.sh@33 -- # return 0 00:03:34.472 14:04:32 -- setup/hugepages.sh@100 -- # resv=0 00:03:34.472 nr_hugepages=1025 00:03:34.472 resv_hugepages=0 00:03:34.472 14:04:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:34.472 14:04:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.472 surplus_hugepages=0 00:03:34.472 anon_hugepages=0 00:03:34.472 14:04:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.472 14:04:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.472 14:04:32 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:34.472 14:04:32 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:34.473 14:04:32 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.473 14:04:32 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.473 14:04:32 -- setup/common.sh@18 -- # local node= 00:03:34.473 14:04:32 -- setup/common.sh@19 -- # local var val 00:03:34.473 14:04:32 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.473 14:04:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.473 14:04:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.473 14:04:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.473 14:04:32 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.473 14:04:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7917192 kB' 'MemAvailable: 9486416 kB' 'Buffers: 3704 kB' 'Cached: 1781292 kB' 'SwapCached: 0 kB' 'Active: 471044 kB' 'Inactive: 1431620 kB' 'Active(anon): 128160 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119260 kB' 'Mapped: 53536 kB' 'Shmem: 10492 kB' 'KReclaimable: 63292 kB' 'Slab: 162564 kB' 'SReclaimable: 63292 kB' 'SUnreclaim: 99272 kB' 'KernelStack: 6560 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 13457552 kB' 'Committed_AS: 327480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.473 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.473 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.474 14:04:32 -- setup/common.sh@33 -- # echo 1025 00:03:34.474 14:04:32 -- setup/common.sh@33 -- # return 0 00:03:34.474 14:04:32 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:34.474 14:04:32 -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.474 14:04:32 -- setup/hugepages.sh@27 -- # local node 00:03:34.474 14:04:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.474 14:04:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:34.474 14:04:32 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:34.474 14:04:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.474 14:04:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.474 14:04:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.474 14:04:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.474 14:04:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.474 14:04:32 -- setup/common.sh@18 -- # local node=0 00:03:34.474 14:04:32 -- setup/common.sh@19 -- # local var val 00:03:34.474 14:04:32 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.474 14:04:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.474 14:04:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.474 14:04:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.474 14:04:32 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.474 14:04:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7917192 kB' 'MemUsed: 4319904 kB' 'SwapCached: 0 kB' 'Active: 471120 kB' 'Inactive: 1431620 kB' 'Active(anon): 128236 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 1784996 kB' 'Mapped: 53536 kB' 'AnonPages: 119280 kB' 'Shmem: 10492 kB' 'KernelStack: 6628 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63292 kB' 'Slab: 162564 kB' 'SReclaimable: 63292 kB' 'SUnreclaim: 99272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.474 
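At this point get_meminfo is invoked again with node=0, so instead of /proc/meminfo it reads the per-node file and strips the "Node <n> " prefix that each of its lines carries, which is why the dump above comes from /sys/devices/system/node/node0/meminfo. The source-selection step visible in the trace, restated as a self-contained sketch (extglob is required for the +([0-9]) pattern the script uses):

    #!/usr/bin/env bash
    # Sketch: choose the per-node meminfo source for node 0 and
    # strip the "Node 0 " prefix so fields parse like /proc/meminfo.
    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"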
14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.474 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.474 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 
14:04:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # continue 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.475 14:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.475 14:04:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.475 14:04:32 -- setup/common.sh@33 -- # echo 0 00:03:34.475 14:04:32 -- setup/common.sh@33 -- # return 0 00:03:34.475 14:04:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.475 14:04:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.475 14:04:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.475 14:04:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.475 node0=1025 expecting 1025 00:03:34.475 14:04:32 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:34.475 14:04:32 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:34.475 00:03:34.475 real 0m0.657s 00:03:34.475 user 0m0.253s 00:03:34.475 sys 0m0.430s 00:03:34.475 14:04:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:34.475 ************************************ 00:03:34.475 END TEST odd_alloc 00:03:34.475 ************************************ 00:03:34.475 14:04:32 -- common/autotest_common.sh@10 -- # set +x 00:03:34.475 14:04:32 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:34.475 14:04:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.475 14:04:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.475 14:04:32 -- common/autotest_common.sh@10 -- # set +x 00:03:34.475 
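odd_alloc passes because all the counters collected above agree: HugePages_Total came back as 1025, surplus and reserved pages are both 0, and node 0 holds every page, so "node0=1025 expecting 1025" matches. The deliberately odd count also squares with the Hugetlb field earlier in the dump: 1025 pages x 2048 kB each = 2099200 kB, exactly as reported. The final check, restated with the trace's own values:

    # Values taken from the trace; restates hugepages.sh@110 and @130.
    nr_hugepages=1025 surp=0 resv=0
    (( 1025 == nr_hugepages + surp + resv )) || echo "FAIL: totals disagree"
    echo "Hugetlb should be $(( nr_hugepages * 2048 )) kB"   # 2099200 kB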
************************************ 00:03:34.475 START TEST custom_alloc 00:03:34.475 ************************************ 00:03:34.475 14:04:32 -- common/autotest_common.sh@1114 -- # custom_alloc 00:03:34.475 14:04:32 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:34.475 14:04:32 -- setup/hugepages.sh@169 -- # local node 00:03:34.475 14:04:32 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:34.475 14:04:32 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:34.475 14:04:32 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:34.475 14:04:32 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:34.475 14:04:32 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:34.475 14:04:32 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:34.475 14:04:32 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:34.475 14:04:32 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:34.475 14:04:32 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:34.475 14:04:32 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:34.475 14:04:32 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.475 14:04:32 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:34.475 14:04:32 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:34.475 14:04:32 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.475 14:04:32 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.475 14:04:32 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:34.475 14:04:32 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:34.475 14:04:32 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.475 14:04:32 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:34.475 14:04:32 -- setup/hugepages.sh@83 -- # : 0 00:03:34.475 14:04:32 -- setup/hugepages.sh@84 -- # : 0 00:03:34.475 14:04:32 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.476 14:04:32 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:34.476 14:04:32 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:34.476 14:04:32 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:34.476 14:04:32 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:34.476 14:04:32 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:34.476 14:04:32 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:34.476 14:04:32 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:34.476 14:04:32 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.476 14:04:32 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:34.476 14:04:32 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:34.476 14:04:32 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.476 14:04:32 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.476 14:04:32 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:34.476 14:04:32 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:34.476 14:04:32 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:34.476 14:04:32 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:34.476 14:04:32 -- setup/hugepages.sh@78 -- # return 0 00:03:34.476 14:04:32 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:34.476 14:04:32 -- setup/hugepages.sh@187 -- # setup output 00:03:34.476 14:04:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.476 14:04:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:35.052 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:35.052 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:35.052 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:35.053 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:35.053 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:35.053 14:04:33 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:35.053 14:04:33 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:35.053 14:04:33 -- setup/hugepages.sh@89 -- # local node 00:03:35.053 14:04:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.053 14:04:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.053 14:04:33 -- setup/hugepages.sh@92 -- # local surp 00:03:35.053 14:04:33 -- setup/hugepages.sh@93 -- # local resv 00:03:35.053 14:04:33 -- setup/hugepages.sh@94 -- # local anon 00:03:35.053 14:04:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.053 14:04:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.053 14:04:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.053 14:04:33 -- setup/common.sh@18 -- # local node= 00:03:35.053 14:04:33 -- setup/common.sh@19 -- # local var val 00:03:35.053 14:04:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.053 14:04:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.053 14:04:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.053 14:04:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.053 14:04:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.053 14:04:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.053 14:04:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8960772 kB' 'MemAvailable: 10529996 kB' 'Buffers: 3704 kB' 'Cached: 1781292 kB' 'SwapCached: 0 kB' 'Active: 471840 kB' 'Inactive: 1431620 kB' 'Active(anon): 128956 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119776 kB' 'Mapped: 53652 kB' 'Shmem: 10492 kB' 'KReclaimable: 63292 kB' 'Slab: 162572 kB' 'SReclaimable: 63292 kB' 'SUnreclaim: 99280 kB' 'KernelStack: 6636 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 327480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55704 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # continue 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
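custom_alloc asks for 1 GiB of hugepage memory (get_test_nr_hugepages 1048576, a size in kB); with the 2048 kB Hugepagesize reported above, that converts to 512 pages, all pinned to node 0 via HUGENODE='nodes_hp[0]=512', and the meminfo dump duly shows HugePages_Total: 512 and Hugetlb: 1048576 kB. The conversion the script performs, restated with illustrative variable names:

    size_kb=1048576 hugepagesize_kb=2048
    nr_hugepages=$(( size_kb / hugepagesize_kb ))              # 512
    HUGENODE="nodes_hp[0]=$nr_hugepages"                       # pin to node 0
    echo "$(( nr_hugepages * hugepagesize_kb )) kB Hugetlb"    # 1048576 kB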
00:03:35.053 14:04:33 -- setup/common.sh@32 -- # continue 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # continue 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # continue 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # continue 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # continue 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # continue 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # continue 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # continue 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # continue 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # continue 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # continue 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # continue 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.053 14:04:33 -- setup/common.sh@32 -- # continue 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.053 14:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.053 14:04:33 -- 
setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[... read loop continues past every remaining non-matching /proc/meminfo field, SwapTotal through HardwareCorrupted ...]
00:03:35.054 14:04:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:35.054 14:04:33 -- setup/common.sh@33 -- # echo 0
00:03:35.054 14:04:33 -- setup/common.sh@33 -- # return 0
00:03:35.054 14:04:33 -- setup/hugepages.sh@97 -- # anon=0
00:03:35.054 14:04:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:35.054 14:04:33 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:35.054 14:04:33 -- setup/common.sh@18 -- # local node=
00:03:35.054 14:04:33 -- setup/common.sh@19 -- # local var val
00:03:35.054 14:04:33 -- setup/common.sh@20 -- # local mem_f mem
00:03:35.054 14:04:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.054 14:04:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.054 14:04:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.054 14:04:33 -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.054 14:04:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.054 14:04:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8960812 kB' 'MemAvailable: 10530036 kB' 'Buffers: 3704 kB' 'Cached: 1781292 kB' 'SwapCached: 0 kB' 'Active: 471676 kB' 'Inactive: 1431620 kB' 'Active(anon): 128792 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119872 kB' 'Mapped: 53648 kB' 'Shmem: 10492 kB' 'KReclaimable: 63292 kB' 'Slab: 162556 kB' 'SReclaimable: 63292 kB' 'SUnreclaim: 99264 kB' 'KernelStack: 6620 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 327480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
[... read loop continues past every snapshot field until HugePages_Surp matches ...]
00:03:35.056 14:04:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:35.056 14:04:33 -- setup/common.sh@33 -- # echo 0
00:03:35.056 14:04:33 -- setup/common.sh@33 -- # return 0
00:03:35.056 14:04:33 -- setup/hugepages.sh@99 -- # surp=0
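The loop traced above is setup/common.sh's get_meminfo pattern: split each 'Key: value kB' line of /proc/meminfo on ': ', skip non-matching keys with continue, and echo the value on the first match. A minimal standalone Bash sketch of that pattern (illustrative only, not the SPDK source itself):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo lookup exercised in the xtrace above:
    # return the numeric value of one /proc/meminfo field.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys
            echo "$val"                        # e.g. 0 for HugePages_Surp
            return 0
        done < /proc/meminfo
        return 1                               # key not present
    }
    get_meminfo HugePages_Surp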
00:03:35.056 14:04:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:35.056 14:04:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:35.056 14:04:33 -- setup/common.sh@18 -- # local node=
00:03:35.056 14:04:33 -- setup/common.sh@19 -- # local var val
00:03:35.056 14:04:33 -- setup/common.sh@20 -- # local mem_f mem
00:03:35.056 14:04:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.056 14:04:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.056 14:04:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.056 14:04:33 -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.056 14:04:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.056 14:04:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8961068 kB' 'MemAvailable: 10530292 kB' 'Active: 471112 kB' 'Active(anon): 128228 kB' 'AnonPages: 119272 kB' 'Mapped: 53612 kB' 'Slab: 162568 kB' 'SUnreclaim: 99276 kB' 'KernelStack: 6576 kB' 'PageTables: 3940 kB' 'VmallocUsed: 55688 kB' [... remaining fields identical to the snapshot above ...]
[... read loop continues past every snapshot field until HugePages_Rsvd matches ...]
00:03:35.058 14:04:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:35.058 14:04:33 -- setup/common.sh@33 -- # echo 0
00:03:35.058 14:04:33 -- setup/common.sh@33 -- # return 0
00:03:35.058 14:04:33 -- setup/hugepages.sh@100 -- # resv=0
00:03:35.058 nr_hugepages=512
00:03:35.058 14:04:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:03:35.058 resv_hugepages=0
00:03:35.058 14:04:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:35.058 surplus_hugepages=0
00:03:35.058 14:04:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:35.058 anon_hugepages=0
00:03:35.058 14:04:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:35.058 14:04:33 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:35.058 14:04:33 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
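The @107/@109 arithmetic above is the pool-consistency check: the requested pool (nr_hugepages=512 here) must account for every page the kernel reports, so HugePages_Total has to equal nr_hugepages plus surplus and reserved pages. Sketched as standalone Bash reusing the get_meminfo sketch above (variable names follow the trace; this is not the verbatim hugepages.sh source):

    # Consistency check over the hugepage pool, as in the trace above.
    nr_hugepages=512
    anon=$(get_meminfo AnonHugePages)     # 0 in this run
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 512 in this run
    # the pool is consistent when total = requested + surplus + reserved
    (( total == nr_hugepages + surp + resv )) || echo "hugepage pool mismatch" >&2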
00:03:35.058 14:04:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:35.058 14:04:33 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:35.058 14:04:33 -- setup/common.sh@18 -- # local node=
00:03:35.058 14:04:33 -- setup/common.sh@19 -- # local var val
00:03:35.058 14:04:33 -- setup/common.sh@20 -- # local mem_f mem
00:03:35.058 14:04:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.058 14:04:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.058 14:04:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.058 14:04:33 -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.058 14:04:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.058 14:04:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'Active: 471372 kB' 'Active(anon): 128488 kB' 'AnonPages: 119532 kB' [... remaining fields identical to the snapshot above ...]
[... read loop continues past every snapshot field until HugePages_Total matches ...]
00:03:35.060 14:04:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:35.060 14:04:33 -- setup/common.sh@33 -- # echo 512
00:03:35.060 14:04:33 -- setup/common.sh@33 -- # return 0
00:03:35.060 14:04:33 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:35.060 14:04:33 -- setup/hugepages.sh@112 -- # get_nodes
00:03:35.060 14:04:33 -- setup/hugepages.sh@27 -- # local node
00:03:35.060 14:04:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:35.060 14:04:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:35.060 14:04:33 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:35.060 14:04:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:35.060 14:04:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:35.060 14:04:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
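get_nodes above discovers NUMA nodes by globbing /sys/devices/system/node/node+([0-9]), and the per-node get_meminfo call that follows reads the node's own meminfo file, whose lines carry a 'Node 0 ' prefix stripped by the extglob expansion visible in the trace (mem=("${mem[@]#Node +([0-9]) }")). A standalone sketch of that per-node variant (illustrative; the real helper is setup/common.sh's get_meminfo when given a node argument):

    # Per-node lookup, as traced below: read the node's sysfs meminfo
    # and drop the "Node <N> " prefix before matching keys. Sketch only.
    shopt -s extglob                       # needed for the +([0-9]) pattern
    get_node_meminfo() {
        local get=$1 node=$2 var val _ line mem
        mapfile -t mem < "/sys/devices/system/node/node$node/meminfo"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node 0 " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }
    get_node_meminfo HugePages_Surp 0      # 0 in this run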
00:03:35.060 14:04:33 -- setup/common.sh@31-@32 -- # [xtrace condensed: the IFS=': ' read -r var val _ loop steps through the remaining /proc/meminfo keys (MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free), hitting continue on each until HugePages_Surp matches]
00:03:35.061 14:04:33 -- setup/common.sh@33 -- # echo 0
00:03:35.061 14:04:33 -- setup/common.sh@33 -- # return 0
00:03:35.061 14:04:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:35.061 14:04:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:35.061 14:04:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:35.061 14:04:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:35.061 14:04:33 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:35.061 node0=512 expecting 512
00:03:35.061 14:04:33 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:35.061 
00:03:35.061 real	0m0.587s
00:03:35.061 user	0m0.243s
00:03:35.061 sys	0m0.367s
00:03:35.061 ************************************
00:03:35.061 14:04:33 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:35.061 14:04:33 -- common/autotest_common.sh@10 -- # set +x
00:03:35.061 END TEST custom_alloc
00:03:35.061 ************************************
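The HugePages_Surp lookup that just returned 0 is setup/common.sh's generic key scan: get_meminfo snapshots the chosen meminfo file and walks it with a colon-splitting read loop, one continue per non-matching key, which is exactly what the condensed xtrace above shows. A minimal self-contained bash sketch of that loop (simplified: the real helper can also target a per-node /sys/devices/system/node/nodeN/meminfo and strips "Node N " prefixes first):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo scan traced above (simplified, /proc only).
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # one xtrace 'continue' per skipped key
            echo "$val"   # kB figure for sizes, bare page count for HugePages_*
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo HugePages_Surp   # prints 0 on this host, matching the trace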
00:03:35.323 14:04:33 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:35.323 14:04:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:35.323 14:04:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:35.323 14:04:33 -- common/autotest_common.sh@10 -- # set +x
00:03:35.323 ************************************
00:03:35.323 START TEST no_shrink_alloc
00:03:35.323 ************************************
00:03:35.323 14:04:33 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:03:35.323 14:04:33 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:35.323 14:04:33 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:35.323 14:04:33 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:35.323 14:04:33 -- setup/hugepages.sh@51 -- # shift
00:03:35.323 14:04:33 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:35.323 14:04:33 -- setup/hugepages.sh@52 -- # local node_ids
00:03:35.323 14:04:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:35.323 14:04:33 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:35.323 14:04:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:35.323 14:04:33 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:35.323 14:04:33 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:35.323 14:04:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:35.323 14:04:33 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:35.323 14:04:33 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:35.323 14:04:33 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:35.323 14:04:33 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:35.323 14:04:33 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:35.323 14:04:33 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:35.323 14:04:33 -- setup/hugepages.sh@73 -- # return 0
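The arithmetic behind the trace above is terse: with the default 2048 kB hugepage size, the requested size of 2097152 (in kB, as the values in this run imply) becomes nr_hugepages=1024, and get_test_nr_hugepages_per_node spreads that count over the caller's node list, here the single node 0. A standalone sketch of the split (names follow the trace loosely; the even division is how the _nr_hugepages=1024 / _no_nodes=1 locals above come about):

    #!/usr/bin/env bash
    # Sketch of get_test_nr_hugepages + the per-node split from the trace.
    # Assumption: the size argument is in kB; hugepage size fixed at 2048 kB.
    declare -g nr_hugepages
    declare -gA nodes_test
    get_test_nr_hugepages() {
        local size=$1; shift
        local node_ids=("$@") node            # ('0') in this run
        nr_hugepages=$((size / 2048))         # 2097152 / 2048 = 1024 pages
        local per_node=$((nr_hugepages / ${#node_ids[@]}))
        nodes_test=()
        for node in "${node_ids[@]}"; do
            nodes_test[$node]=$per_node       # all 1024 pages land on node 0
        done
    }
    get_test_nr_hugepages 2097152 0
    echo "node0=${nodes_test[0]} expecting 1024"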
00:03:35.323 14:04:33 -- setup/hugepages.sh@198 -- # setup output
00:03:35.323 14:04:33 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:35.323 14:04:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:35.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:35.585 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:35.585 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:35.585 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:35.585 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:35.850 14:04:34 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:35.850 14:04:34 -- setup/hugepages.sh@89 -- # local node
00:03:35.850 14:04:34 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:35.850 14:04:34 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:35.850 14:04:34 -- setup/hugepages.sh@92 -- # local surp
00:03:35.850 14:04:34 -- setup/hugepages.sh@93 -- # local resv
00:03:35.850 14:04:34 -- setup/hugepages.sh@94 -- # local anon
00:03:35.850 14:04:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:35.850 14:04:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:35.850 14:04:34 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:35.850 14:04:34 -- setup/common.sh@18 -- # local node=
00:03:35.850 14:04:34 -- setup/common.sh@19 -- # local var val
00:03:35.850 14:04:34 -- setup/common.sh@20 -- # local mem_f mem
00:03:35.850 14:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.850 14:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.850 14:04:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.850 14:04:34 -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.850 14:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.850 14:04:34 -- setup/common.sh@31 -- # IFS=': '
00:03:35.850 14:04:34 -- setup/common.sh@31 -- # read -r var val _
00:03:35.850 14:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7918128 kB' 'MemAvailable: 9487344 kB' 'Buffers: 3704 kB' 'Cached: 1781292 kB' 'SwapCached: 0 kB' 'Active: 469700 kB' 'Inactive: 1431620 kB' 'Active(anon): 126816 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 117932 kB' 'Mapped: 52800 kB' 'Shmem: 10492 kB' 'KReclaimable: 63276 kB' 'Slab: 162384 kB' 'SReclaimable: 63276 kB' 'SUnreclaim: 99108 kB' 'KernelStack: 6540 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 313900 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
00:03:35.850 14:04:34 -- setup/common.sh@31-@32 -- # [xtrace condensed: key-by-key scan from MemTotal through HardwareCorrupted, continue on each, until AnonHugePages matches]
00:03:35.851 14:04:34 -- setup/common.sh@33 -- # echo 0
00:03:35.851 14:04:34 -- setup/common.sh@33 -- # return 0
00:03:35.851 14:04:34 -- setup/hugepages.sh@97 -- # anon=0
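anon came back 0, and the guard a few lines up explains why it is checked at all: [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] is xtrace's escaped rendering of the pattern *[never]*, matched against /sys/kernel/mm/transparent_hugepage/enabled, where the kernel brackets the active THP mode. Only when THP is not "never" can AnonHugePages be nonzero and skew the hugepage accounting. The same gate in isolation (standard sysfs path assumed):

    #!/usr/bin/env bash
    # Sketch of the THP gate from the trace above.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        grep '^AnonHugePages:' /proc/meminfo  # 0 kB in this run
    fi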
00:03:35.851 14:04:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:35.851 14:04:34 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:35.851 14:04:34 -- setup/common.sh@18 -- # local node=
00:03:35.851 14:04:34 -- setup/common.sh@19 -- # local var val
00:03:35.851 14:04:34 -- setup/common.sh@20 -- # local mem_f mem
00:03:35.851 14:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.851 14:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.851 14:04:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.851 14:04:34 -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.851 14:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.851 14:04:34 -- setup/common.sh@31 -- # IFS=': '
00:03:35.851 14:04:34 -- setup/common.sh@31 -- # read -r var val _
00:03:35.851 14:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7918184 kB' 'MemAvailable: 9487400 kB' 'Buffers: 3704 kB' 'Cached: 1781292 kB' 'SwapCached: 0 kB' 'Active: 469452 kB' 'Inactive: 1431620 kB' 'Active(anon): 126568 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 117672 kB' 'Mapped: 52816 kB' 'Shmem: 10492 kB' 'KReclaimable: 63276 kB' 'Slab: 162384 kB' 'SReclaimable: 63276 kB' 'SUnreclaim: 99108 kB' 'KernelStack: 6492 kB' 'PageTables: 3616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 313900 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
00:03:35.851 14:04:34 -- setup/common.sh@31-@32 -- # [xtrace condensed: key-by-key scan from MemTotal through HugePages_Free, continue on each, until HugePages_Surp matches]
00:03:35.852 14:04:34 -- setup/common.sh@33 -- # echo 0
00:03:35.852 14:04:34 -- setup/common.sh@33 -- # return 0
00:03:35.852 14:04:34 -- setup/hugepages.sh@99 -- # surp=0
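surp is the second of three counters verify_nr_hugepages collects, and each get_meminfo call re-reads and re-scans the whole file, which is why the same ~60-key snapshot is dumped four times in this stretch of log. When only the hugepage counters matter, one pass suffices (a hypothetical one-liner, not part of the SPDK scripts):

    # All HugePages_* counters in one read; these are page counts, not kB.
    awk -F': *' '/^HugePages_/ { print $1, $2 }' /proc/meminfo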
00:03:35.852 14:04:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:35.852 14:04:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:35.852 14:04:34 -- setup/common.sh@18 -- # local node=
00:03:35.852 14:04:34 -- setup/common.sh@19 -- # local var val
00:03:35.852 14:04:34 -- setup/common.sh@20 -- # local mem_f mem
00:03:35.852 14:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.852 14:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.852 14:04:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.852 14:04:34 -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.852 14:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.852 14:04:34 -- setup/common.sh@31 -- # IFS=': '
00:03:35.852 14:04:34 -- setup/common.sh@31 -- # read -r var val _
00:03:35.852 14:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7918184 kB' 'MemAvailable: 9487400 kB' 'Buffers: 3704 kB' 'Cached: 1781292 kB' 'SwapCached: 0 kB' 'Active: 469664 kB' 'Inactive: 1431620 kB' 'Active(anon): 126780 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 117624 kB' 'Mapped: 52816 kB' 'Shmem: 10492 kB' 'KReclaimable: 63276 kB' 'Slab: 162384 kB' 'SReclaimable: 63276 kB' 'SUnreclaim: 99108 kB' 'KernelStack: 6544 kB' 'PageTables: 3568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 313900 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
00:03:35.852 14:04:34 -- setup/common.sh@31-@32 -- # [xtrace condensed: key-by-key scan from MemTotal through HugePages_Free, continue on each, until HugePages_Rsvd matches]
00:03:35.854 14:04:34 -- setup/common.sh@33 -- # echo 0
00:03:35.854 14:04:34 -- setup/common.sh@33 -- # return 0
00:03:35.854 nr_hugepages=1024
00:03:35.854 14:04:34 -- setup/hugepages.sh@100 -- # resv=0
00:03:35.854 14:04:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:35.854 resv_hugepages=0
00:03:35.854 14:04:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:35.854 surplus_hugepages=0
00:03:35.854 anon_hugepages=0
00:03:35.854 14:04:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:35.854 14:04:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:35.854 14:04:34 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:35.854 14:04:34 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
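Those two arithmetic tests are the heart of the verification: the pool visible in /proc/meminfo must equal exactly what get_test_nr_hugepages requested, with nothing surplus and nothing stuck in reservations. Restated as a runnable sketch (values are the ones echoed above; the helper is a simplified stand-in for setup/common.sh's scan):

    #!/usr/bin/env bash
    # Sketch of the consistency check verify_nr_hugepages just performed.
    get_meminfo() { awk -F': *' -v k="$1" '$1 == k { print $2 + 0 }' /proc/meminfo; }
    nr_hugepages=1024                     # requested by get_test_nr_hugepages
    anon=$(get_meminfo AnonHugePages)     # 0: no THP-backed anon pages
    surp=$(get_meminfo HugePages_Surp)    # 0: nothing allocated beyond the pool
    resv=$(get_meminfo HugePages_Rsvd)    # 0: nothing reserved but unfaulted
    (( nr_hugepages + surp + resv == 1024 )) && echo 'pool consistent'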
00:03:35.854 14:04:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:35.854 14:04:34 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:35.854 14:04:34 -- setup/common.sh@18 -- # local node=
00:03:35.854 14:04:34 -- setup/common.sh@19 -- # local var val
00:03:35.854 14:04:34 -- setup/common.sh@20 -- # local mem_f mem
00:03:35.854 14:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.854 14:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.854 14:04:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.854 14:04:34 -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.854 14:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.854 14:04:34 -- setup/common.sh@31 -- # IFS=': '
00:03:35.854 14:04:34 -- setup/common.sh@31 -- # read -r var val _
00:03:35.854 14:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7918184 kB' 'MemAvailable: 9487400 kB' 'Buffers: 3704 kB' 'Cached: 1781292 kB' 'SwapCached: 0 kB' 'Active: 469304 kB' 'Inactive: 1431620 kB' 'Active(anon): 126420 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 117576 kB' 'Mapped: 52816 kB' 'Shmem: 10492 kB' 'KReclaimable: 63276 kB' 'Slab: 162380 kB' 'SReclaimable: 63276 kB' 'SUnreclaim: 99104 kB' 'KernelStack: 6512 kB' 'PageTables: 3732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 313900 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
00:03:35.855 14:04:34 -- setup/common.sh@31-@32 -- # [xtrace condensed: key-by-key scan toward HugePages_Total; this excerpt of the log ends mid-scan, at the HardwareCorrupted comparison]
00:03:35.855 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.855 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.855 14:04:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.855 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.855 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.855 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.855 14:04:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.855 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.855 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.855 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.855 14:04:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.855 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.855 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.855 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.855 14:04:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.855 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.855 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.855 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.855 14:04:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.855 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.855 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.855 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.855 14:04:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.855 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.855 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.855 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.855 14:04:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.855 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.855 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.855 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.855 14:04:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.855 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.855 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.855 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.856 14:04:34 -- setup/common.sh@33 -- # echo 1024 00:03:35.856 14:04:34 -- setup/common.sh@33 -- # return 0 00:03:35.856 14:04:34 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.856 14:04:34 -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.856 14:04:34 -- setup/hugepages.sh@27 -- # local node 00:03:35.856 14:04:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.856 14:04:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:35.856 14:04:34 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:35.856 14:04:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.856 14:04:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.856 14:04:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.856 14:04:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.856 14:04:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.856 14:04:34 -- setup/common.sh@18 -- # local node=0 
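For readers tracing along: the pattern above is SPDK's get_meminfo helper from test/setup/common.sh, which snapshots /proc/meminfo (or a node's sysfs copy), strips the per-node prefix, and scans line by line until the requested key matches. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the repository, so treat the function name and details as approximate:

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the +([0-9]) pattern below

    # Sketch of the lookup the trace shows (hedged reconstruction).
    get_meminfo_sketch() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo
      local -a mem
      # With a node argument, prefer the per-node sysfs copy of meminfo.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node lines carry a "Node <N> " prefix; strip it so keys align.
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
          echo "$val"   # e.g. 1024 for HugePages_Total on this runner
          return 0
        fi
      done < <(printf '%s\n' "${mem[@]}")
      return 1
    }

The linear scan is what produces the long runs of 'continue' in the trace: every snapshot key before the match is tested and skipped.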
00:03:35.856 14:04:34 -- setup/common.sh@19 -- # local var val 00:03:35.856 14:04:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.856 14:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.856 14:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.856 14:04:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.856 14:04:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.856 14:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7918184 kB' 'MemUsed: 4318912 kB' 'SwapCached: 0 kB' 'Active: 469496 kB' 'Inactive: 1431620 kB' 'Active(anon): 126612 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 1784996 kB' 'Mapped: 52816 kB' 'AnonPages: 117456 kB' 'Shmem: 10492 kB' 'KernelStack: 6496 kB' 'PageTables: 3680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63276 kB' 'Slab: 162380 kB' 'SReclaimable: 63276 kB' 'SUnreclaim: 99104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.856 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.856 14:04:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.857 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.857 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.857 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.857 14:04:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.857 14:04:34 -- setup/common.sh@32 -- 
# continue 00:03:35.857 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.857 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.857 14:04:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.857 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.857 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.857 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.857 14:04:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.857 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.857 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.857 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.857 14:04:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.857 14:04:34 -- setup/common.sh@32 -- # continue 00:03:35.857 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.857 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.857 14:04:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.857 14:04:34 -- setup/common.sh@33 -- # echo 0 00:03:35.857 14:04:34 -- setup/common.sh@33 -- # return 0 00:03:35.857 14:04:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.857 14:04:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.857 14:04:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.857 14:04:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.857 node0=1024 expecting 1024 00:03:35.857 14:04:34 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:35.857 14:04:34 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:35.857 14:04:34 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:35.857 14:04:34 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:35.857 14:04:34 -- setup/hugepages.sh@202 -- # setup output 00:03:35.857 14:04:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.857 14:04:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:36.434 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:36.434 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.434 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.434 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.434 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.434 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:36.434 14:04:34 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:36.434 14:04:34 -- setup/hugepages.sh@89 -- # local node 00:03:36.434 14:04:34 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.434 14:04:34 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.434 14:04:34 -- setup/hugepages.sh@92 -- # local surp 00:03:36.434 14:04:34 -- setup/hugepages.sh@93 -- # local resv 00:03:36.434 14:04:34 -- setup/hugepages.sh@94 -- # local anon 00:03:36.434 14:04:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.434 14:04:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.434 14:04:34 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.434 14:04:34 -- setup/common.sh@18 -- # local node= 00:03:36.434 14:04:34 -- setup/common.sh@19 -- # local var val 00:03:36.434 14:04:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.434 14:04:34 -- 
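The hugepages.sh@96 test just above is a transparent-hugepage gate: /sys/kernel/mm/transparent_hugepage/enabled reads like "always [madvise] never" with the active mode bracketed, and AnonHugePages is only worth sampling when that mode is not [never]. A sketch of the guard under those assumptions (variable names are illustrative; get_meminfo_sketch is the reconstruction shown earlier, not SPDK's code):

    # Only sample AnonHugePages when THP is not disabled outright.
    thp=/sys/kernel/mm/transparent_hugepage/enabled   # e.g. "always [madvise] never"
    anon=0
    if [[ -r $thp && $(<"$thp") != *\[never\]* ]]; then
      # Active mode is "always" or "madvise", so anonymous hugepages can
      # exist; /proc/meminfo reports them under AnonHugePages in kB.
      anon=$(get_meminfo_sketch AnonHugePages)
    fi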
00:03:36.434 14:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.434 14:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.434 14:04:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.434 14:04:34 -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.434 14:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.434 14:04:34 -- setup/common.sh@31 -- # IFS=': '
00:03:36.434 14:04:34 -- setup/common.sh@31 -- # read -r var val _
00:03:36.434 14:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7915672 kB' 'MemAvailable: 9484888 kB' 'Buffers: 3704 kB' 'Cached: 1781292 kB' 'SwapCached: 0 kB' 'Active: 470524 kB' 'Inactive: 1431620 kB' 'Active(anon): 127640 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 118200 kB' 'Mapped: 53312 kB' 'Shmem: 10492 kB' 'KReclaimable: 63276 kB' 'Slab: 162132 kB' 'SReclaimable: 63276 kB' 'SUnreclaim: 98856 kB' 'KernelStack: 6628 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 313900 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55720 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
00:03:36.434 14:04:34 -- setup/common.sh@31-32 -- # [scan condensed: every snapshot key above (MemTotal, MemFree, ..., HardwareCorrupted) is compared against AnonHugePages and skipped with 'continue' until the AnonHugePages line matches]
00:03:36.434 14:04:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:36.434 14:04:34 -- setup/common.sh@33 -- # echo 0
00:03:36.434 14:04:34 -- setup/common.sh@33 -- # return 0
00:03:36.434 14:04:34 -- setup/hugepages.sh@97 -- # anon=0
00:03:36.434 14:04:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:36.434 14:04:34 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.434 14:04:34 -- setup/common.sh@18 -- # local node=
00:03:36.434 14:04:34 -- setup/common.sh@19 -- # local var val
00:03:36.434 14:04:34 -- setup/common.sh@20 -- # local mem_f mem
00:03:36.434 14:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.434 14:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.434 14:04:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.434 14:04:34 -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.434 14:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.434 14:04:34 -- setup/common.sh@31 -- # IFS=': '
00:03:36.434 14:04:34 -- setup/common.sh@31 -- # read -r var val _
00:03:36.434 14:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7916028 kB' 'MemAvailable: 9485244 kB' 'Buffers: 3704 kB' 'Cached: 1781292 kB' 'SwapCached: 0 kB' 'Active: 468960 kB' 'Inactive: 1431620 kB' 'Active(anon): 126076 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 117416 kB' 'Mapped: 52684 kB' 'Shmem: 10492 kB' 'KReclaimable: 63276 kB' 'Slab: 162108 kB' 'SReclaimable: 63276 kB' 'SUnreclaim: 98832 kB' 'KernelStack: 6512 kB' 'PageTables: 3624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 313900 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55608 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
00:03:36.435 14:04:34 -- setup/common.sh@31-32 -- # [scan condensed: every snapshot key above (MemTotal, MemFree, ..., HugePages_Free) is compared against HugePages_Surp and skipped with 'continue' until the HugePages_Surp line matches]
00:03:36.436 14:04:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.436 14:04:34 -- setup/common.sh@33 -- # echo 0
00:03:36.436 14:04:34 -- setup/common.sh@33 -- # return 0
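The earlier "Requested 512 hugepages but 1024 already allocated on node0" message is scripts/setup.sh noticing that the existing pool already covers NRHUGE=512, so with CLEAR_HUGE=no it leaves the larger pool in place. A hedged sketch of that idempotent behavior; request_hugepages and its exact checks are assumptions made for illustration, only the sysfs path is standard kernel ABI:

    # Grow-only hugepage request (assumed logic, not the verbatim setup.sh).
    request_hugepages() {
      local want=$1 node=${2:-0} have
      local nr=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
      have=$(<"$nr")
      if (( have >= want )); then
        echo "INFO: Requested $want hugepages but $have already allocated on node$node"
        return 0
      fi
      echo "$want" > "$nr"   # needs root; the kernel may grant fewer pages
    }

    # With NRHUGE=512 and 1024 pages already present:
    # request_hugepages 512 0   -> prints the INFO line, pool left untouched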
00:03:36.436 14:04:34 -- setup/hugepages.sh@99 -- # surp=0
00:03:36.436 14:04:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:36.436 14:04:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:36.436 14:04:34 -- setup/common.sh@18 -- # local node=
00:03:36.436 14:04:34 -- setup/common.sh@19 -- # local var val
00:03:36.436 14:04:34 -- setup/common.sh@20 -- # local mem_f mem
00:03:36.436 14:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.436 14:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.436 14:04:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.436 14:04:34 -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.436 14:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.436 14:04:34 -- setup/common.sh@31 -- # IFS=': '
00:03:36.437 14:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7915776 kB' 'MemAvailable: 9484992 kB' 'Buffers: 3704 kB' 'Cached: 1781292 kB' 'SwapCached: 0 kB' 'Active: 468972 kB' 'Inactive: 1431620 kB' 'Active(anon): 126088 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 117480 kB' 'Mapped: 52684 kB' 'Shmem: 10492 kB' 'KReclaimable: 63276 kB' 'Slab: 162108 kB' 'SReclaimable: 63276 kB' 'SUnreclaim: 98832 kB' 'KernelStack: 6512 kB' 'PageTables: 3624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 313900 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55608 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB'
00:03:36.437 14:04:34 -- setup/common.sh@31 -- # read -r var val _
00:03:36.437 14:04:34 -- setup/common.sh@31-32 -- # [scan condensed: the snapshot keys (MemTotal, MemFree, ..., VmallocUsed) are compared against HugePages_Rsvd and skipped with 'continue'; the captured log ends mid-comparison]
00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ VmallocChunk ==
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 
00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.438 14:04:34 -- setup/common.sh@33 -- # echo 0 00:03:36.438 14:04:34 -- setup/common.sh@33 -- # return 0 00:03:36.438 14:04:34 -- setup/hugepages.sh@100 -- # resv=0 00:03:36.438 nr_hugepages=1024 00:03:36.438 resv_hugepages=0 00:03:36.438 14:04:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:36.438 14:04:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.438 surplus_hugepages=0 00:03:36.438 14:04:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.438 anon_hugepages=0 00:03:36.438 14:04:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.438 14:04:34 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.438 14:04:34 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:36.438 14:04:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.438 14:04:34 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.438 14:04:34 -- setup/common.sh@18 -- # local node= 00:03:36.438 14:04:34 -- setup/common.sh@19 -- # local var val 00:03:36.438 14:04:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.438 14:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.438 14:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.438 14:04:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.438 14:04:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.438 14:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7915776 kB' 'MemAvailable: 9484992 kB' 'Buffers: 3704 kB' 'Cached: 1781292 kB' 'SwapCached: 0 kB' 'Active: 469164 kB' 'Inactive: 1431620 kB' 'Active(anon): 126280 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 117360 kB' 'Mapped: 52684 kB' 'Shmem: 10492 kB' 'KReclaimable: 63276 kB' 'Slab: 162104 kB' 'SReclaimable: 63276 kB' 'SUnreclaim: 98828 kB' 'KernelStack: 6480 kB' 'PageTables: 3524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 313900 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55608 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 6090752 kB' 'DirectMap1G: 8388608 kB' 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 
14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.438 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.438 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- 
# [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 
14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # continue 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.439 14:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.439 14:04:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.439 14:04:34 -- setup/common.sh@33 -- # echo 1024 00:03:36.439 14:04:34 -- setup/common.sh@33 -- # return 0 00:03:36.439 14:04:34 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.439 14:04:34 -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.439 14:04:34 -- setup/hugepages.sh@27 -- # local node 00:03:36.439 14:04:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.440 14:04:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:36.440 14:04:34 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:36.440 14:04:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.440 14:04:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.440 14:04:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.440 14:04:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.440 14:04:34 -- setup/common.sh@17 -- # 
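The wall of xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo (or a node's own meminfo file) one line at a time and echoing the value for a single key. A self-contained sketch of the same technique; the function shape mirrors the trace, but the sed-based "Node N " strip and variable layout are our simplification, not the script's verbatim code:

#!/usr/bin/env bash
# Pull one key out of /proc/meminfo, or out of a per-node meminfo when a
# node number is given. Each line looks like "Key: value kB", so splitting
# on ': ' yields the key in $var and the number in $val.
get_meminfo() {
	local get=$1 node=$2
	local var val _
	local mem_f=/proc/meminfo
	[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	# Per-node files prefix every line with "Node <n> "; strip it so the
	# same split works for both sources (the script does this with a
	# "${mem[@]#Node +([0-9]) }" expansion instead).
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done < <(sed 's/^Node [0-9]* //' "$mem_f")
	return 1
}

get_meminfo HugePages_Total     # -> 1024 on this box
get_meminfo HugePages_Surp 0    # same key, read from node0's meminfo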
00:03:36.440 14:04:34 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.440 14:04:34 -- setup/common.sh@18 -- # local node=0
00:03:36.440 14:04:34 -- setup/common.sh@19 -- # local var val
00:03:36.440 14:04:34 -- setup/common.sh@20 -- # local mem_f mem
00:03:36.440 14:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.440 14:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:36.440 14:04:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:36.440 14:04:34 -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.440 14:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.440 14:04:34 -- setup/common.sh@31 -- # IFS=': '
00:03:36.440 14:04:34 -- setup/common.sh@31 -- # read -r var val _
00:03:36.440 14:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7916036 kB' 'MemUsed: 4321060 kB' 'SwapCached: 0 kB' 'Active: 469200 kB' 'Inactive: 1431620 kB' 'Active(anon): 126316 kB' 'Inactive(anon): 0 kB' 'Active(file): 342884 kB' 'Inactive(file): 1431620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 1784996 kB' 'Mapped: 52684 kB' 'AnonPages: 117440 kB' 'Shmem: 10492 kB' 'KernelStack: 6496 kB' 'PageTables: 3576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63276 kB' 'Slab: 162104 kB' 'SReclaimable: 63276 kB' 'SUnreclaim: 98828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:36.440 ... (per-key scan of the node0 meminfo runs until HugePages_Surp matches) ...
00:03:36.441 14:04:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.441 14:04:34 -- setup/common.sh@33 -- # echo 0
00:03:36.441 14:04:34 -- setup/common.sh@33 -- # return 0
00:03:36.441 14:04:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:36.441 14:04:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:36.441 14:04:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:36.441 node0=1024 expecting 1024
00:03:36.441 14:04:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:36.441 14:04:34 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:36.441 14:04:34 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:36.441
00:03:36.441 real 0m1.220s
00:03:36.441 user 0m0.536s
00:03:36.441 sys 0m0.720s
00:03:36.441 14:04:34 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:36.441 ************************************
00:03:36.441 END TEST no_shrink_alloc
00:03:36.441 ************************************
00:03:36.441 14:04:34 -- common/autotest_common.sh@10 -- # set +x
00:03:36.441 14:04:34 -- setup/hugepages.sh@217 -- # clear_hp
00:03:36.441 14:04:34 -- setup/hugepages.sh@37 -- # local node hp
00:03:36.441 14:04:34 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:36.441 14:04:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:36.441 14:04:34 -- setup/hugepages.sh@41 -- # echo 0
00:03:36.441 14:04:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:36.441 14:04:34 -- setup/hugepages.sh@41 -- # echo 0
00:03:36.441 14:04:34 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:36.441 14:04:34 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:36.441
00:03:36.441 real 0m5.670s
00:03:36.441 user 0m2.226s
00:03:36.441 sys 0m3.296s
00:03:36.441 14:04:34 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:36.441 ************************************
00:03:36.441 END TEST hugepages
00:03:36.441 ************************************
00:03:36.441 14:04:34 -- common/autotest_common.sh@10 -- # set +x
00:03:36.441 14:04:34 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:03:36.441 14:04:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:36.441 14:04:34 -- common/autotest_common.sh@1093 -- # xtrace_disable
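The clear_hp teardown traced above releases the test's hugepage reservation by zeroing every per-node pool before the next suite starts. A minimal sketch under the assumption of standard sysfs paths; we glob the node directories directly instead of reusing the script's nodes_sys array, and writing these files requires root:

#!/usr/bin/env bash
# Return all hugepages (2 MiB and, if present, 1 GiB pools) on every
# NUMA node by writing 0 to each pool's nr_hugepages knob.
clear_hp() {
	local node hp
	for node in /sys/devices/system/node/node[0-9]*; do
		for hp in "$node"/hugepages/hugepages-*; do
			echo 0 >"$hp/nr_hugepages"
		done
	done
	export CLEAR_HUGE=yes   # tell later setup.sh runs the pools were cleared
}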
00:03:36.441 14:04:34 -- common/autotest_common.sh@10 -- # set +x
00:03:36.441 ************************************
00:03:36.441 START TEST driver
00:03:36.441 ************************************
00:03:36.441 14:04:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:03:36.703 * Looking for test storage...
00:03:36.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:03:36.703 14:04:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:03:36.703 14:04:35 -- common/autotest_common.sh@1690 -- # lcov --version
00:03:36.703 14:04:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:03:36.703 14:04:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:03:36.703 14:04:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:03:36.703 14:04:35 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:03:36.703 14:04:35 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:03:36.703 14:04:35 -- scripts/common.sh@335 -- # IFS=.-:
00:03:36.703 14:04:35 -- scripts/common.sh@335 -- # read -ra ver1
00:03:36.703 14:04:35 -- scripts/common.sh@336 -- # IFS=.-:
00:03:36.703 14:04:35 -- scripts/common.sh@336 -- # read -ra ver2
00:03:36.703 14:04:35 -- scripts/common.sh@337 -- # local 'op=<'
00:03:36.703 14:04:35 -- scripts/common.sh@339 -- # ver1_l=2
00:03:36.703 14:04:35 -- scripts/common.sh@340 -- # ver2_l=1
00:03:36.703 14:04:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:03:36.703 14:04:35 -- scripts/common.sh@343 -- # case "$op" in
00:03:36.703 14:04:35 -- scripts/common.sh@344 -- # : 1
00:03:36.703 14:04:35 -- scripts/common.sh@363 -- # (( v = 0 ))
00:03:36.703 14:04:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:36.703 14:04:35 -- scripts/common.sh@364 -- # decimal 1
00:03:36.703 14:04:35 -- scripts/common.sh@352 -- # local d=1
00:03:36.703 14:04:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:36.703 14:04:35 -- scripts/common.sh@354 -- # echo 1
00:03:36.703 14:04:35 -- scripts/common.sh@364 -- # ver1[v]=1
00:03:36.703 14:04:35 -- scripts/common.sh@365 -- # decimal 2
00:03:36.703 14:04:35 -- scripts/common.sh@352 -- # local d=2
00:03:36.703 14:04:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:36.703 14:04:35 -- scripts/common.sh@354 -- # echo 2
00:03:36.703 14:04:35 -- scripts/common.sh@365 -- # ver2[v]=2
00:03:36.703 14:04:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:03:36.703 14:04:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:03:36.703 14:04:35 -- scripts/common.sh@367 -- # return 0
00:03:36.703 14:04:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:36.703 14:04:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:03:36.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:36.703 --rc genhtml_branch_coverage=1
00:03:36.703 --rc genhtml_function_coverage=1
00:03:36.703 --rc genhtml_legend=1
00:03:36.703 --rc geninfo_all_blocks=1
00:03:36.703 --rc geninfo_unexecuted_blocks=1
00:03:36.703
00:03:36.703 '
00:03:36.703 ... (the same option block is echoed three more times: for the LCOV_OPTS= assignment and for the export and assignment of LCOV='lcov ...') ...
00:03:36.703 14:04:35 -- setup/driver.sh@68 -- # setup reset
00:03:36.704 14:04:35 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:36.704 14:04:35 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:43.287 14:04:41 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:43.287 14:04:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:43.287 14:04:41 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:43.287 14:04:41 -- common/autotest_common.sh@10 -- # set +x
00:03:43.287 ************************************
00:03:43.287 START TEST guess_driver
00:03:43.287 ************************************
00:03:43.287 14:04:41 -- common/autotest_common.sh@1114 -- # guess_driver
00:03:43.287 14:04:41 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:43.287 14:04:41 -- setup/driver.sh@47 -- # local fail=0
00:03:43.287 14:04:41 -- setup/driver.sh@49 -- # pick_driver
00:03:43.287 14:04:41 -- setup/driver.sh@36 -- # vfio
00:03:43.287 14:04:41 -- setup/driver.sh@21 -- # local iommu_groups
00:03:43.287 14:04:41 -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:43.287 14:04:41 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:43.287 14:04:41 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:43.287 14:04:41 -- setup/driver.sh@29 -- # (( 0 > 0 ))
00:03:43.287 14:04:41 -- setup/driver.sh@29 -- # [[ '' == Y ]]
00:03:43.287 14:04:41 -- setup/driver.sh@32 -- # return 1
00:03:43.287 14:04:41 -- setup/driver.sh@38 -- # uio
00:03:43.287 14:04:41 -- setup/driver.sh@17 -- # is_driver uio_pci_generic
00:03:43.287 14:04:41 -- setup/driver.sh@14 -- # mod uio_pci_generic
00:03:43.287 14:04:41 -- setup/driver.sh@12 -- # dep uio_pci_generic
00:03:43.287 14:04:41 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic
00:03:43.287 14:04:41 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz
00:03:43.287 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]]
00:03:43.287 14:04:41 -- setup/driver.sh@39 -- # echo uio_pci_generic
00:03:43.287 14:04:41 -- setup/driver.sh@49 -- # driver=uio_pci_generic
00:03:43.287 14:04:41 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:43.287 Looking for driver=uio_pci_generic
00:03:43.287 14:04:41 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic'
00:03:43.287 14:04:41 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:43.287 14:04:41 -- setup/driver.sh@45 -- # setup output config
00:03:43.287 14:04:41 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:43.287 14:04:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
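The guess_driver trace shows pick_driver preferring vfio only when the host has populated IOMMU groups (or vfio's unsafe no-IOMMU mode is switched on), and otherwise accepting uio_pci_generic if modprobe can resolve it to real kernel modules. A condensed sketch of that decision; the helper structure and the grep check are our simplification of the traced logic:

#!/usr/bin/env bash
shopt -s nullglob   # an empty iommu_groups glob must count as 0 entries

# Pick the userspace PCI driver the way the trace above does: vfio-pci
# when the IOMMU is usable, uio_pci_generic as the fallback.
pick_driver() {
	local iommu_groups=(/sys/kernel/iommu_groups/*)
	local unsafe_vfio=
	[[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
		unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
	if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
		echo vfio-pci
		return 0
	fi
	# modprobe --show-depends lists the insmod commands for the module and
	# its dependencies; seeing ".ko" means the module actually exists.
	if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
		echo uio_pci_generic
	else
		echo 'No valid driver found'
	fi
}

pick_driver   # prints uio_pci_generic on this VM: 0 IOMMU groups, no vfio override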
00:03:43.544 14:04:42 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]]
00:03:43.545 14:04:42 -- setup/driver.sh@58 -- # continue
00:03:43.545 14:04:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:43.803 14:04:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:43.803 14:04:42 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:03:43.803 14:04:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:43.803 14:04:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:43.803 14:04:42 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:03:43.803 14:04:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:43.803 14:04:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:43.803 14:04:42 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:03:43.803 14:04:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:43.803 14:04:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:43.803 14:04:42 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:03:43.803 14:04:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:43.803 14:04:42 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:43.803 14:04:42 -- setup/driver.sh@65 -- # setup reset
00:03:43.803 14:04:42 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:43.803 14:04:42 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:50.364
00:03:50.364 real 0m6.947s
00:03:50.364 user 0m0.679s
00:03:50.364 sys 0m1.184s
00:03:50.364 14:04:48 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:50.364 ************************************
00:03:50.364 END TEST guess_driver
00:03:50.364 ************************************
00:03:50.364 14:04:48 -- common/autotest_common.sh@10 -- # set +x
00:03:50.364
00:03:50.364 real 0m13.134s
00:03:50.364 user 0m1.080s
00:03:50.364 sys 0m2.013s
00:03:50.364 14:04:48 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:50.364 14:04:48 -- common/autotest_common.sh@10 -- # set +x
00:03:50.364 ************************************
00:03:50.364 END TEST driver
00:03:50.364 ************************************
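The cmp_versions xtrace in the driver prologue earlier (the lt 1.15 2 check) is a pure-bash, field-by-field numeric version comparison after splitting on '.', '-', and ':'. A compact sketch of just the less-than case; this is our simplification, and unlike the real helper it does not route fields through a decimal converter, so it assumes purely numeric fields:

#!/usr/bin/env bash
# True (exit 0) when dotted version $1 is strictly less than $2.
version_lt() {
	local -a ver1 ver2
	local IFS=.-:
	read -ra ver1 <<<"$1"
	read -ra ver2 <<<"$2"
	local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
	for (( v = 0; v < max; v++ )); do
		# missing fields compare as 0, so 1.15 vs 2 becomes (1,15) vs (2,0)
		(( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
		(( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
	done
	return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2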
00:03:50.364 14:04:48 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:03:50.364 14:04:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:50.364 14:04:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:50.364 14:04:48 -- common/autotest_common.sh@10 -- # set +x
00:03:50.364 ************************************
00:03:50.364 START TEST devices
00:03:50.364 ************************************
00:03:50.364 14:04:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:03:50.364 * Looking for test storage...
00:03:50.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:03:50.364 ... (cmp_versions 1.15 '<' 2 and the LCOV_OPTS/LCOV exports run again here, identical to the xtrace shown in the driver suite above) ...
00:03:50.364 14:04:48 -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:50.364 14:04:48 -- setup/devices.sh@192 -- # setup reset
00:03:50.364 14:04:48 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:50.364 14:04:48 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:50.931 14:04:49 -- setup/devices.sh@194 -- # get_zoned_devs
00:03:50.931 14:04:49 -- common/autotest_common.sh@1664 -- # zoned_devs=()
00:03:50.931 14:04:49 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:03:50.931 14:04:49 -- common/autotest_common.sh@1665 -- # local nvme bdf
00:03:50.931 14:04:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:03:50.931 14:04:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1
00:03:50.931 14:04:49 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1
00:03:50.931 14:04:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]]
00:03:50.931 14:04:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:03:50.931 ... (the same is_block_zoned check repeats for nvme0n1, nvme1n1, nvme1n2, nvme1n3, nvme2n1, and nvme3n1; none of them report a zoned queue) ...
00:03:50.931 14:04:49 -- setup/devices.sh@196 -- # blocks=()
00:03:50.931 14:04:49 -- setup/devices.sh@196 -- # declare -a blocks
00:03:50.931 14:04:49 -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:50.931 14:04:49 -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:50.931 14:04:49 -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:50.931 14:04:49 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:50.931 14:04:49 -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:50.931 14:04:49 -- setup/devices.sh@201 -- # ctrl=nvme0
00:03:50.931 14:04:49 -- setup/devices.sh@202 -- # pci=0000:00:09.0
00:03:50.931 14:04:49 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]]
00:03:50.931 14:04:49 -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:03:50.931 14:04:49 -- scripts/common.sh@380 -- # local block=nvme0n1 pt
00:03:50.931 14:04:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1
00:03:50.931 No valid GPT data, bailing
00:03:50.931 14:04:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:50.931 14:04:49 -- scripts/common.sh@393 -- # pt=
00:03:50.931 14:04:49 -- scripts/common.sh@394 -- # return 1
00:03:50.931 14:04:49 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:03:50.931 14:04:49 -- setup/common.sh@76 -- # local dev=nvme0n1
00:03:50.931 14:04:49 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:03:50.931 14:04:49 -- setup/common.sh@80 -- # echo 1073741824
00:03:50.931 14:04:49 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size ))
00:03:50.931 14:04:49 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:50.931 14:04:49 -- setup/devices.sh@201 -- # ctrl=nvme1n1
00:03:50.931 14:04:49 -- setup/devices.sh@201 -- # ctrl=nvme1
00:03:50.931 14:04:49 -- setup/devices.sh@202 -- # pci=0000:00:08.0
00:03:50.931 14:04:49 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]]
00:03:50.931 14:04:49 -- setup/devices.sh@204 -- # block_in_use nvme1n1
00:03:50.931 14:04:49 -- scripts/common.sh@380 -- # local block=nvme1n1 pt
00:03:50.931 14:04:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1
00:03:50.931 No valid GPT data, bailing
00:03:50.931 14:04:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:03:50.931 14:04:49 -- scripts/common.sh@393 -- # pt=
00:03:50.931 14:04:49 -- scripts/common.sh@394 -- # return 1
00:03:50.931 14:04:49 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1
00:03:50.931 14:04:49 -- setup/common.sh@76 -- # local dev=nvme1n1
00:03:50.931 14:04:49 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]]
00:03:50.931 14:04:49 -- setup/common.sh@80 -- # echo 4294967296
00:03:50.931 14:04:49 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size ))
00:03:50.931 14:04:49 -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:50.931 14:04:49 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0
00:03:50.931 14:04:49 -- setup/devices.sh@200 -- #
for block in "/sys/block/nvme"!(*c*) 00:03:50.931 14:04:49 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:03:50.931 14:04:49 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:50.931 14:04:49 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:03:50.931 14:04:49 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:50.931 14:04:49 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:03:50.931 14:04:49 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:03:50.931 14:04:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:03:51.192 No valid GPT data, bailing 00:03:51.192 14:04:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:51.192 14:04:49 -- scripts/common.sh@393 -- # pt= 00:03:51.192 14:04:49 -- scripts/common.sh@394 -- # return 1 00:03:51.192 14:04:49 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:03:51.192 14:04:49 -- setup/common.sh@76 -- # local dev=nvme1n2 00:03:51.192 14:04:49 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:03:51.192 14:04:49 -- setup/common.sh@80 -- # echo 4294967296 00:03:51.192 14:04:49 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:51.192 14:04:49 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:51.192 14:04:49 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:03:51.192 14:04:49 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:51.192 14:04:49 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:03:51.192 14:04:49 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:51.192 14:04:49 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:03:51.192 14:04:49 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:51.192 14:04:49 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:03:51.192 14:04:49 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:03:51.192 14:04:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:03:51.192 No valid GPT data, bailing 00:03:51.192 14:04:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:51.192 14:04:49 -- scripts/common.sh@393 -- # pt= 00:03:51.192 14:04:49 -- scripts/common.sh@394 -- # return 1 00:03:51.192 14:04:49 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:03:51.192 14:04:49 -- setup/common.sh@76 -- # local dev=nvme1n3 00:03:51.192 14:04:49 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:03:51.192 14:04:49 -- setup/common.sh@80 -- # echo 4294967296 00:03:51.192 14:04:49 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:51.192 14:04:49 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:51.192 14:04:49 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:03:51.192 14:04:49 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:51.192 14:04:49 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:03:51.192 14:04:49 -- setup/devices.sh@201 -- # ctrl=nvme2 00:03:51.192 14:04:49 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:03:51.192 14:04:49 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:51.192 14:04:49 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:03:51.192 14:04:49 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:03:51.192 14:04:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:03:51.192 No valid GPT data, bailing 00:03:51.192 14:04:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:51.192 
14:04:49 -- scripts/common.sh@393 -- # pt= 00:03:51.192 14:04:49 -- scripts/common.sh@394 -- # return 1 00:03:51.192 14:04:49 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:03:51.192 14:04:49 -- setup/common.sh@76 -- # local dev=nvme2n1 00:03:51.192 14:04:49 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:03:51.192 14:04:49 -- setup/common.sh@80 -- # echo 6343335936 00:03:51.192 14:04:49 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:03:51.192 14:04:49 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:51.192 14:04:49 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:03:51.192 14:04:49 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:51.192 14:04:49 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:03:51.192 14:04:49 -- setup/devices.sh@201 -- # ctrl=nvme3 00:03:51.192 14:04:49 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:51.192 14:04:49 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:51.192 14:04:49 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:03:51.192 14:04:49 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:03:51.192 14:04:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:03:51.192 No valid GPT data, bailing 00:03:51.453 14:04:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:51.453 14:04:49 -- scripts/common.sh@393 -- # pt= 00:03:51.453 14:04:49 -- scripts/common.sh@394 -- # return 1 00:03:51.453 14:04:49 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:03:51.453 14:04:49 -- setup/common.sh@76 -- # local dev=nvme3n1 00:03:51.453 14:04:49 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:03:51.453 14:04:49 -- setup/common.sh@80 -- # echo 5368709120 00:03:51.453 14:04:49 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:51.453 14:04:49 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:51.453 14:04:49 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:51.453 14:04:49 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:03:51.453 14:04:49 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:03:51.453 14:04:49 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:51.453 14:04:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:51.453 14:04:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:51.453 14:04:49 -- common/autotest_common.sh@10 -- # set +x 00:03:51.453 ************************************ 00:03:51.453 START TEST nvme_mount 00:03:51.453 ************************************ 00:03:51.453 14:04:49 -- common/autotest_common.sh@1114 -- # nvme_mount 00:03:51.453 14:04:49 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:03:51.453 14:04:49 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:03:51.453 14:04:49 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:51.453 14:04:49 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:51.453 14:04:49 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:03:51.453 14:04:49 -- setup/common.sh@39 -- # local disk=nvme1n1 00:03:51.453 14:04:49 -- setup/common.sh@40 -- # local part_no=1 00:03:51.453 14:04:49 -- setup/common.sh@41 -- # local size=1073741824 00:03:51.453 14:04:49 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:51.453 14:04:49 -- setup/common.sh@44 -- # parts=() 00:03:51.453 14:04:49 -- 
setup/common.sh@44 -- # local parts 00:03:51.453 14:04:49 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:51.453 14:04:49 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.453 14:04:49 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:51.453 14:04:49 -- setup/common.sh@46 -- # (( part++ )) 00:03:51.453 14:04:49 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.453 14:04:49 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:51.453 14:04:49 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:03:51.453 14:04:49 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:03:52.393 Creating new GPT entries in memory. 00:03:52.393 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:52.393 other utilities. 00:03:52.393 14:04:50 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:52.393 14:04:50 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.393 14:04:50 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:52.393 14:04:50 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:52.393 14:04:50 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:03:53.777 Creating new GPT entries in memory. 00:03:53.777 The operation has completed successfully. 00:03:53.777 14:04:51 -- setup/common.sh@57 -- # (( part++ )) 00:03:53.777 14:04:51 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:53.777 14:04:51 -- setup/common.sh@62 -- # wait 53727 00:03:53.777 14:04:51 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.777 14:04:51 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:53.777 14:04:51 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.777 14:04:51 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:03:53.777 14:04:51 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:03:53.777 14:04:51 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.777 14:04:52 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.777 14:04:52 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:03:53.777 14:04:52 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:03:53.777 14:04:52 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.777 14:04:52 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.777 14:04:52 -- setup/devices.sh@53 -- # local found=0 00:03:53.777 14:04:52 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.777 14:04:52 -- setup/devices.sh@56 -- # : 00:03:53.777 14:04:52 -- setup/devices.sh@59 -- # local pci status 00:03:53.777 14:04:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.777 14:04:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:03:53.777 14:04:52 -- setup/devices.sh@47 -- # setup output config 00:03:53.777 14:04:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.777 14:04:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:53.777 14:04:52 -- setup/devices.sh@62 -- # [[ 
0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:53.777 14:04:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.777 14:04:52 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:53.777 14:04:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.038 14:04:52 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:54.038 14:04:52 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:03:54.038 14:04:52 -- setup/devices.sh@63 -- # found=1 00:03:54.038 14:04:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.038 14:04:52 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:54.038 14:04:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.298 14:04:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:54.298 14:04:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.298 14:04:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:54.298 14:04:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.298 14:04:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.298 14:04:52 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:54.298 14:04:52 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.298 14:04:52 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.298 14:04:52 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:54.298 14:04:52 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:54.298 14:04:52 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.298 14:04:52 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.298 14:04:52 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:03:54.298 14:04:52 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:03:54.298 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:54.298 14:04:52 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:03:54.298 14:04:52 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:03:54.559 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:54.559 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:54.559 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:54.559 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:03:54.559 14:04:53 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:54.559 14:04:53 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:54.559 14:04:53 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.559 14:04:53 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:03:54.559 14:04:53 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:03:54.820 14:04:53 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.820 14:04:53 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:54.820 14:04:53 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:03:54.820 14:04:53 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:03:54.820 14:04:53 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.820 14:04:53 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:54.820 14:04:53 -- setup/devices.sh@53 -- # local found=0 00:03:54.820 14:04:53 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.820 14:04:53 -- setup/devices.sh@56 -- # : 00:03:54.820 14:04:53 -- setup/devices.sh@59 -- # local pci status 00:03:54.820 14:04:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.820 14:04:53 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:03:54.820 14:04:53 -- setup/devices.sh@47 -- # setup output config 00:03:54.820 14:04:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.820 14:04:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:54.820 14:04:53 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:54.820 14:04:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.081 14:04:53 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:55.081 14:04:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.343 14:04:53 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:55.343 14:04:53 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:03:55.343 14:04:53 -- setup/devices.sh@63 -- # found=1 00:03:55.343 14:04:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.343 14:04:53 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:55.343 14:04:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.343 14:04:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:55.343 14:04:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.604 14:04:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:55.604 14:04:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.604 14:04:53 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:55.604 14:04:53 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:55.604 14:04:53 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.604 14:04:53 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:55.604 14:04:53 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:55.604 14:04:53 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.604 14:04:54 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:03:55.604 14:04:54 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:03:55.604 14:04:54 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:03:55.604 14:04:54 -- setup/devices.sh@50 -- # local mount_point= 00:03:55.604 14:04:54 -- setup/devices.sh@51 -- # local test_file= 00:03:55.604 14:04:54 -- setup/devices.sh@53 -- # local found=0 
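The mkfs helper traced just above (setup/common.sh@68-72) boils down to three commands. A standalone sketch, destructive by design, with the device, size cap, and mount point copied verbatim from the trace:

    dev=/dev/nvme1n1
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    mkdir -p "$mnt"                # create the mount point
    mkfs.ext4 -qF "$dev" 1024M     # quiet + force; cap the filesystem at 1024M as in the log
    mount "$dev" "$mnt"

The verify step traced around this point then asks setup.sh for its view of active devices and matches the allowed controller's status line against the expected nvme1n1 mount.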
00:03:55.604 14:04:54 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:55.604 14:04:54 -- setup/devices.sh@59 -- # local pci status 00:03:55.604 14:04:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.604 14:04:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:03:55.604 14:04:54 -- setup/devices.sh@47 -- # setup output config 00:03:55.604 14:04:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.604 14:04:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:55.604 14:04:54 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:55.604 14:04:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.865 14:04:54 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:55.865 14:04:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.126 14:04:54 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:56.126 14:04:54 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:03:56.126 14:04:54 -- setup/devices.sh@63 -- # found=1 00:03:56.126 14:04:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.126 14:04:54 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:56.126 14:04:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.388 14:04:54 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:56.388 14:04:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.388 14:04:54 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:56.388 14:04:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.388 14:04:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.388 14:04:54 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:56.388 14:04:54 -- setup/devices.sh@68 -- # return 0 00:03:56.388 14:04:54 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:56.388 14:04:54 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.388 14:04:54 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:03:56.388 14:04:54 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:03:56.388 14:04:54 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:03:56.388 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:56.388 00:03:56.388 real 0m5.085s 00:03:56.388 user 0m0.966s 00:03:56.388 sys 0m1.332s 00:03:56.388 14:04:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:56.388 14:04:54 -- common/autotest_common.sh@10 -- # set +x 00:03:56.388 ************************************ 00:03:56.388 END TEST nvme_mount 00:03:56.388 ************************************ 00:03:56.388 14:04:54 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:56.388 14:04:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.388 14:04:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.388 14:04:54 -- common/autotest_common.sh@10 -- # set +x 00:03:56.647 ************************************ 00:03:56.647 START TEST dm_mount 00:03:56.647 ************************************ 00:03:56.647 14:04:54 -- common/autotest_common.sh@1114 -- # dm_mount 00:03:56.647 14:04:54 -- setup/devices.sh@144 -- # pv=nvme1n1 00:03:56.647 14:04:54 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:03:56.647 14:04:54 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:03:56.647 14:04:54 -- setup/devices.sh@148 -- # 
partition_drive nvme1n1 00:03:56.647 14:04:54 -- setup/common.sh@39 -- # local disk=nvme1n1 00:03:56.647 14:04:54 -- setup/common.sh@40 -- # local part_no=2 00:03:56.647 14:04:54 -- setup/common.sh@41 -- # local size=1073741824 00:03:56.647 14:04:54 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:56.647 14:04:54 -- setup/common.sh@44 -- # parts=() 00:03:56.647 14:04:54 -- setup/common.sh@44 -- # local parts 00:03:56.647 14:04:54 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:56.647 14:04:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.647 14:04:54 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:56.647 14:04:54 -- setup/common.sh@46 -- # (( part++ )) 00:03:56.647 14:04:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.647 14:04:54 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:56.647 14:04:54 -- setup/common.sh@46 -- # (( part++ )) 00:03:56.647 14:04:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.647 14:04:54 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:56.647 14:04:54 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:03:56.647 14:04:54 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:03:57.592 Creating new GPT entries in memory. 00:03:57.592 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:57.592 other utilities. 00:03:57.592 14:04:55 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:57.592 14:04:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:57.592 14:04:55 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:57.592 14:04:55 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:57.592 14:04:55 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:03:58.581 Creating new GPT entries in memory. 00:03:58.581 The operation has completed successfully. 00:03:58.581 14:04:57 -- setup/common.sh@57 -- # (( part++ )) 00:03:58.581 14:04:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.581 14:04:57 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:58.581 14:04:57 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:58.581 14:04:57 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:03:59.526 The operation has completed successfully. 
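Condensed, the partition loop above issues one sgdisk call per partition and serializes each call with flock on the disk node so nothing else touches the GPT mid-write. A sketch using the exact sector ranges from the trace (each span is 262144 sectors, i.e. 128 MiB at 512-byte sectors):

    disk=/dev/nvme1n1
    sgdisk "$disk" --zap-all                             # wipe existing GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:264191     # p1: sectors 2048..264191
    flock "$disk" sgdisk "$disk" --new=2:264192:526335   # p2: sectors 264192..526335

sync_dev_uevents.sh, launched before the loop, waits (as its name suggests) for the kernel uevents for nvme1n1p1 and nvme1n1p2, so the test does not race the creation of the partition device nodes.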
00:03:59.526 14:04:58 -- setup/common.sh@57 -- # (( part++ )) 00:03:59.526 14:04:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:59.526 14:04:58 -- setup/common.sh@62 -- # wait 54355 00:03:59.787 14:04:58 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:59.787 14:04:58 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.787 14:04:58 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:59.787 14:04:58 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:59.787 14:04:58 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:59.787 14:04:58 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:59.787 14:04:58 -- setup/devices.sh@161 -- # break 00:03:59.787 14:04:58 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:59.787 14:04:58 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:59.787 14:04:58 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:59.787 14:04:58 -- setup/devices.sh@166 -- # dm=dm-0 00:03:59.787 14:04:58 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:03:59.787 14:04:58 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:03:59.787 14:04:58 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.787 14:04:58 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:59.787 14:04:58 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.787 14:04:58 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:59.787 14:04:58 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:59.787 14:04:58 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.787 14:04:58 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:59.787 14:04:58 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:03:59.787 14:04:58 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:03:59.787 14:04:58 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.787 14:04:58 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:59.787 14:04:58 -- setup/devices.sh@53 -- # local found=0 00:03:59.787 14:04:58 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:59.787 14:04:58 -- setup/devices.sh@56 -- # : 00:03:59.787 14:04:58 -- setup/devices.sh@59 -- # local pci status 00:03:59.787 14:04:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.787 14:04:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:03:59.787 14:04:58 -- setup/devices.sh@47 -- # setup output config 00:03:59.787 14:04:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.787 14:04:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:00.048 14:04:58 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:00.048 14:04:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.048 14:04:58 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:00.048 14:04:58 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.309 14:04:58 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:00.309 14:04:58 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:00.309 14:04:58 -- setup/devices.sh@63 -- # found=1 00:04:00.309 14:04:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.309 14:04:58 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:00.309 14:04:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.570 14:04:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:00.570 14:04:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.570 14:04:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:00.570 14:04:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.570 14:04:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:00.570 14:04:59 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:00.570 14:04:59 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:00.570 14:04:59 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:00.570 14:04:59 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:00.570 14:04:59 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:00.570 14:04:59 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:04:00.570 14:04:59 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:00.570 14:04:59 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:04:00.570 14:04:59 -- setup/devices.sh@50 -- # local mount_point= 00:04:00.570 14:04:59 -- setup/devices.sh@51 -- # local test_file= 00:04:00.570 14:04:59 -- setup/devices.sh@53 -- # local found=0 00:04:00.570 14:04:59 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:00.570 14:04:59 -- setup/devices.sh@59 -- # local pci status 00:04:00.570 14:04:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.570 14:04:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:00.570 14:04:59 -- setup/devices.sh@47 -- # setup output config 00:04:00.570 14:04:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.570 14:04:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:00.847 14:04:59 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:00.847 14:04:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.847 14:04:59 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:00.847 14:04:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.111 14:04:59 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:01.111 14:04:59 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:04:01.111 14:04:59 -- setup/devices.sh@63 -- # found=1 00:04:01.111 14:04:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.111 14:04:59 -- setup/devices.sh@62 -- 
# [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:01.112 14:04:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.373 14:04:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:01.373 14:04:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.373 14:04:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:01.373 14:04:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.373 14:04:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:01.373 14:04:59 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:01.373 14:04:59 -- setup/devices.sh@68 -- # return 0 00:04:01.373 14:04:59 -- setup/devices.sh@187 -- # cleanup_dm 00:04:01.373 14:04:59 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:01.373 14:04:59 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:01.373 14:04:59 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:01.373 14:04:59 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:01.373 14:04:59 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:04:01.373 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:01.373 14:04:59 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:01.373 14:04:59 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:04:01.373 ************************************ 00:04:01.373 END TEST dm_mount 00:04:01.373 00:04:01.373 real 0m4.939s 00:04:01.373 user 0m0.658s 00:04:01.373 sys 0m0.905s 00:04:01.373 14:04:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:01.373 14:04:59 -- common/autotest_common.sh@10 -- # set +x 00:04:01.373 ************************************ 00:04:01.634 14:04:59 -- setup/devices.sh@1 -- # cleanup 00:04:01.634 14:04:59 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:01.634 14:04:59 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.634 14:04:59 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:01.634 14:04:59 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:01.635 14:04:59 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:01.635 14:04:59 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:01.896 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:01.896 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:01.896 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:01.896 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:01.896 14:05:00 -- setup/devices.sh@12 -- # cleanup_dm 00:04:01.896 14:05:00 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:01.896 14:05:00 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:01.896 14:05:00 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:01.897 14:05:00 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:01.897 14:05:00 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:04:01.897 14:05:00 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:04:01.897 00:04:01.897 real 0m12.129s 00:04:01.897 user 0m2.391s 00:04:01.897 sys 0m2.961s 00:04:01.897 ************************************ 00:04:01.897 END TEST devices 00:04:01.897 ************************************ 00:04:01.897 14:05:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:01.897 14:05:00 -- common/autotest_common.sh@10 -- # 
set +x 00:04:01.897 00:04:01.897 real 0m42.626s 00:04:01.897 user 0m8.146s 00:04:01.897 sys 0m11.839s 00:04:01.897 14:05:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:01.897 ************************************ 00:04:01.897 END TEST setup.sh 00:04:01.897 ************************************ 00:04:01.897 14:05:00 -- common/autotest_common.sh@10 -- # set +x 00:04:01.897 14:05:00 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:02.158 Hugepages 00:04:02.159 node hugesize free / total 00:04:02.159 node0 1048576kB 0 / 0 00:04:02.159 node0 2048kB 2048 / 2048 00:04:02.159 00:04:02.159 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:02.159 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:02.159 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:02.420 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:02.420 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:02.420 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:02.420 14:05:00 -- spdk/autotest.sh@128 -- # uname -s 00:04:02.420 14:05:00 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:02.420 14:05:00 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:02.420 14:05:00 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.365 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.365 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.365 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.626 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.626 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.626 14:05:02 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:04.568 14:05:03 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:04.568 14:05:03 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:04.568 14:05:03 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:04.568 14:05:03 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:04.568 14:05:03 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:04.568 14:05:03 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:04.568 14:05:03 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:04.568 14:05:03 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:04.568 14:05:03 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:04.568 14:05:03 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:04:04.568 14:05:03 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:04.569 14:05:03 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:05.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.135 Waiting for block devices as requested 00:04:05.135 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:04:05.135 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:04:05.395 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:05.395 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:10.666 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:04:10.666 14:05:08 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:10.666 14:05:08 -- 
common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:10.666 14:05:08 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:10.666 14:05:08 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:04:10.666 14:05:08 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:10.666 14:05:08 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:04:10.666 14:05:08 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:10.666 14:05:08 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme2 00:04:10.666 14:05:08 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme2 00:04:10.666 14:05:08 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme2 ]] 00:04:10.666 14:05:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:10.666 14:05:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:10.666 14:05:08 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:10.666 14:05:08 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:10.666 14:05:08 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:10.666 14:05:08 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:10.666 14:05:08 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme2 00:04:10.666 14:05:08 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:10.666 14:05:08 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:10.666 14:05:08 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:10.666 14:05:08 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:10.666 14:05:08 -- common/autotest_common.sh@1552 -- # continue 00:04:10.666 14:05:08 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:10.666 14:05:08 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:10.666 14:05:08 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:04:10.666 14:05:08 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:10.666 14:05:08 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:10.666 14:05:08 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:04:10.666 14:05:08 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:10.666 14:05:08 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme3 00:04:10.666 14:05:08 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme3 00:04:10.666 14:05:08 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme3 ]] 00:04:10.666 14:05:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:10.666 14:05:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:10.666 14:05:08 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:10.666 14:05:08 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:10.666 14:05:08 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:10.666 14:05:08 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:10.666 14:05:08 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme3 00:04:10.666 14:05:08 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:10.666 14:05:08 -- common/autotest_common.sh@1549 -- # grep 
unvmcap 00:04:10.666 14:05:08 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:10.666 14:05:08 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:10.666 14:05:08 -- common/autotest_common.sh@1552 -- # continue 00:04:10.666 14:05:08 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:10.666 14:05:08 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:04:10.666 14:05:08 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:10.666 14:05:08 -- common/autotest_common.sh@1497 -- # grep 0000:00:08.0/nvme/nvme 00:04:10.666 14:05:08 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:10.666 14:05:08 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:04:10.666 14:05:08 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:10.666 14:05:08 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:04:10.666 14:05:08 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:04:10.666 14:05:08 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:04:10.666 14:05:08 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:10.666 14:05:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:10.666 14:05:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:10.666 14:05:09 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:10.666 14:05:09 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:10.666 14:05:09 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:10.666 14:05:09 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:10.666 14:05:09 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:10.666 14:05:09 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:04:10.666 14:05:09 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:10.666 14:05:09 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:10.666 14:05:09 -- common/autotest_common.sh@1552 -- # continue 00:04:10.666 14:05:09 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:10.666 14:05:09 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:04:10.666 14:05:09 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:10.666 14:05:09 -- common/autotest_common.sh@1497 -- # grep 0000:00:09.0/nvme/nvme 00:04:10.666 14:05:09 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:10.666 14:05:09 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:04:10.666 14:05:09 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:10.666 14:05:09 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:10.666 14:05:09 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:10.666 14:05:09 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:10.666 14:05:09 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:10.666 14:05:09 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:10.666 14:05:09 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:10.666 14:05:09 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:10.666 14:05:09 -- 
common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:10.666 14:05:09 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:10.666 14:05:09 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:10.666 14:05:09 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:10.666 14:05:09 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:10.666 14:05:09 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:10.666 14:05:09 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:10.666 14:05:09 -- common/autotest_common.sh@1552 -- # continue 00:04:10.666 14:05:09 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:10.666 14:05:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:10.666 14:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:10.666 14:05:09 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:10.666 14:05:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:10.666 14:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:10.666 14:05:09 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:11.233 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.491 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:11.491 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:11.491 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:11.491 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:11.491 14:05:10 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:11.491 14:05:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:11.491 14:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:11.750 14:05:10 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:11.750 14:05:10 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:11.750 14:05:10 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:11.750 14:05:10 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:11.750 14:05:10 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:11.750 14:05:10 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:11.750 14:05:10 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:11.750 14:05:10 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:11.750 14:05:10 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:11.750 14:05:10 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:11.750 14:05:10 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:11.750 14:05:10 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:04:11.750 14:05:10 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:11.750 14:05:10 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:11.750 14:05:10 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:11.750 14:05:10 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:11.750 14:05:10 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:11.750 14:05:10 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:11.750 14:05:10 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:11.750 14:05:10 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:11.750 14:05:10 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
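The loop just traced is a gate, not a cleanup: opal_revert_cleanup reads each controller's PCI device ID from sysfs and only reverts OPAL state on parts reporting 0x0a54; every controller here reports 0x0010 (vendor 1b36, QEMU's emulated NVMe, per the setup.sh status table earlier in the log), so the list stays empty and the function returns early. The same sysfs probe as a standalone sketch, with the BDFs from this run:

    # Print only NVMe controllers whose PCI device ID is 0x0a54.
    for bdf in 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0; do
        dev_id=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $dev_id == 0x0a54 ]] && echo "$bdf"   # prints nothing on this QEMU VM
    done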
00:04:11.750 14:05:10 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:11.750 14:05:10 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:04:11.750 14:05:10 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:11.750 14:05:10 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:11.750 14:05:10 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:11.750 14:05:10 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:04:11.750 14:05:10 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:11.750 14:05:10 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:11.750 14:05:10 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:04:11.750 14:05:10 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:04:11.750 14:05:10 -- common/autotest_common.sh@1588 -- # return 0 00:04:11.750 14:05:10 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:11.750 14:05:10 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:11.750 14:05:10 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:11.750 14:05:10 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:11.750 14:05:10 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:11.750 14:05:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:11.750 14:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:11.750 14:05:10 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:11.750 14:05:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.750 14:05:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.750 14:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:11.750 ************************************ 00:04:11.750 START TEST env 00:04:11.750 ************************************ 00:04:11.750 14:05:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:11.750 * Looking for test storage... 00:04:11.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:11.750 14:05:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:11.750 14:05:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:11.750 14:05:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:11.750 14:05:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:11.750 14:05:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:11.750 14:05:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:11.750 14:05:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:11.750 14:05:10 -- scripts/common.sh@335 -- # IFS=.-: 00:04:11.750 14:05:10 -- scripts/common.sh@335 -- # read -ra ver1 00:04:11.750 14:05:10 -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.750 14:05:10 -- scripts/common.sh@336 -- # read -ra ver2 00:04:11.750 14:05:10 -- scripts/common.sh@337 -- # local 'op=<' 00:04:11.750 14:05:10 -- scripts/common.sh@339 -- # ver1_l=2 00:04:11.750 14:05:10 -- scripts/common.sh@340 -- # ver2_l=1 00:04:11.750 14:05:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:11.750 14:05:10 -- scripts/common.sh@343 -- # case "$op" in 00:04:11.750 14:05:10 -- scripts/common.sh@344 -- # : 1 00:04:11.750 14:05:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:11.750 14:05:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:11.750 14:05:10 -- scripts/common.sh@364 -- # decimal 1 00:04:11.750 14:05:10 -- scripts/common.sh@352 -- # local d=1 00:04:11.750 14:05:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.750 14:05:10 -- scripts/common.sh@354 -- # echo 1 00:04:11.750 14:05:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:11.750 14:05:10 -- scripts/common.sh@365 -- # decimal 2 00:04:11.750 14:05:10 -- scripts/common.sh@352 -- # local d=2 00:04:11.750 14:05:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.750 14:05:10 -- scripts/common.sh@354 -- # echo 2 00:04:11.750 14:05:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:11.750 14:05:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:11.750 14:05:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:11.750 14:05:10 -- scripts/common.sh@367 -- # return 0 00:04:11.750 14:05:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.750 14:05:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:11.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.750 --rc genhtml_branch_coverage=1 00:04:11.750 --rc genhtml_function_coverage=1 00:04:11.750 --rc genhtml_legend=1 00:04:11.750 --rc geninfo_all_blocks=1 00:04:11.750 --rc geninfo_unexecuted_blocks=1 00:04:11.750 00:04:11.750 ' 00:04:11.750 14:05:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:11.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.750 --rc genhtml_branch_coverage=1 00:04:11.750 --rc genhtml_function_coverage=1 00:04:11.750 --rc genhtml_legend=1 00:04:11.750 --rc geninfo_all_blocks=1 00:04:11.750 --rc geninfo_unexecuted_blocks=1 00:04:11.750 00:04:11.750 ' 00:04:11.751 14:05:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:11.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.751 --rc genhtml_branch_coverage=1 00:04:11.751 --rc genhtml_function_coverage=1 00:04:11.751 --rc genhtml_legend=1 00:04:11.751 --rc geninfo_all_blocks=1 00:04:11.751 --rc geninfo_unexecuted_blocks=1 00:04:11.751 00:04:11.751 ' 00:04:11.751 14:05:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:11.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.751 --rc genhtml_branch_coverage=1 00:04:11.751 --rc genhtml_function_coverage=1 00:04:11.751 --rc genhtml_legend=1 00:04:11.751 --rc geninfo_all_blocks=1 00:04:11.751 --rc geninfo_unexecuted_blocks=1 00:04:11.751 00:04:11.751 ' 00:04:11.751 14:05:10 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:11.751 14:05:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.751 14:05:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.751 14:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:11.751 ************************************ 00:04:11.751 START TEST env_memory 00:04:11.751 ************************************ 00:04:11.751 14:05:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:12.009 00:04:12.009 00:04:12.009 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.009 http://cunit.sourceforge.net/ 00:04:12.009 00:04:12.009 00:04:12.009 Suite: memory 00:04:12.009 Test: alloc and free memory map ...[2024-11-19 14:05:10.362023] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:12.009 passed 00:04:12.009 Test: mem 
map translation ...[2024-11-19 14:05:10.400702] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:12.009 [2024-11-19 14:05:10.400754] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:12.009 [2024-11-19 14:05:10.400813] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:12.009 [2024-11-19 14:05:10.400827] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:12.009 passed 00:04:12.009 Test: mem map registration ...[2024-11-19 14:05:10.468999] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:12.009 [2024-11-19 14:05:10.469034] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:12.009 passed 00:04:12.009 Test: mem map adjacent registrations ...passed 00:04:12.009 00:04:12.009 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.009 suites 1 1 n/a 0 0 00:04:12.009 tests 4 4 4 0 0 00:04:12.009 asserts 152 152 152 0 n/a 00:04:12.009 00:04:12.009 Elapsed time = 0.233 seconds 00:04:12.268 00:04:12.268 real 0m0.266s 00:04:12.268 user 0m0.237s 00:04:12.268 sys 0m0.024s 00:04:12.268 14:05:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:12.268 14:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:12.268 ************************************ 00:04:12.268 END TEST env_memory 00:04:12.268 ************************************ 00:04:12.268 14:05:10 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:12.268 14:05:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:12.268 14:05:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:12.268 14:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:12.268 ************************************ 00:04:12.268 START TEST env_vtophys 00:04:12.268 ************************************ 00:04:12.268 14:05:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:12.268 EAL: lib.eal log level changed from notice to debug 00:04:12.268 EAL: Detected lcore 0 as core 0 on socket 0 00:04:12.268 EAL: Detected lcore 1 as core 0 on socket 0 00:04:12.268 EAL: Detected lcore 2 as core 0 on socket 0 00:04:12.268 EAL: Detected lcore 3 as core 0 on socket 0 00:04:12.268 EAL: Detected lcore 4 as core 0 on socket 0 00:04:12.268 EAL: Detected lcore 5 as core 0 on socket 0 00:04:12.268 EAL: Detected lcore 6 as core 0 on socket 0 00:04:12.268 EAL: Detected lcore 7 as core 0 on socket 0 00:04:12.268 EAL: Detected lcore 8 as core 0 on socket 0 00:04:12.268 EAL: Detected lcore 9 as core 0 on socket 0 00:04:12.268 EAL: Maximum logical cores by configuration: 128 00:04:12.268 EAL: Detected CPU lcores: 10 00:04:12.268 EAL: Detected NUMA nodes: 1 00:04:12.268 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:12.268 EAL: Detected shared linkage of DPDK 00:04:12.268 EAL: No shared files mode enabled, IPC will be disabled 00:04:12.268 EAL: Selected IOVA mode 'PA' 00:04:12.268 EAL: Probing VFIO support... 
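The START TEST / END TEST banners and the per-test real/user/sys timings running through this log come from the harness's run_test helper. A minimal bash sketch of that wrapper's observable shape (the banner layout is taken from the trace; the real helper in common/autotest_common.sh also manages xtrace and failure propagation, which this sketch only approximates):

# Minimal run_test-style wrapper, reconstructed from the banners above.
run_test_sketch() {
    local name=$1
    shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    local start=$SECONDS rc=0
    "$@" || rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    echo "elapsed ~$((SECONDS - start))s, rc=$rc"
    return $rc
}

# Usage, matching the invocation traced above:
# run_test_sketch env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut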
00:04:12.268 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:12.268 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:12.268 EAL: Ask a virtual area of 0x2e000 bytes 00:04:12.268 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:12.268 EAL: Setting up physically contiguous memory... 00:04:12.268 EAL: Setting maximum number of open files to 524288 00:04:12.268 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:12.268 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:12.268 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.268 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:12.268 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.268 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.268 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:12.268 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:12.268 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.268 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:12.268 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.268 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.268 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:12.268 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:12.268 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.268 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:12.268 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.268 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.268 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:12.268 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:12.268 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.268 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:12.268 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.268 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.268 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:12.268 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:12.268 EAL: Hugepages will be freed exactly as allocated. 00:04:12.268 EAL: No shared files mode enabled, IPC is disabled 00:04:12.268 EAL: No shared files mode enabled, IPC is disabled 00:04:12.268 EAL: TSC frequency is ~2600000 KHz 00:04:12.268 EAL: Main lcore 0 is ready (tid=7fee9982ca40;cpuset=[0]) 00:04:12.268 EAL: Trying to obtain current memory policy. 00:04:12.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.268 EAL: Restoring previous memory policy: 0 00:04:12.268 EAL: request: mp_malloc_sync 00:04:12.268 EAL: No shared files mode enabled, IPC is disabled 00:04:12.268 EAL: Heap on socket 0 was expanded by 2MB 00:04:12.268 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:12.268 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:12.268 EAL: Mem event callback 'spdk:(nil)' registered 00:04:12.268 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:12.268 00:04:12.268 00:04:12.268 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.268 http://cunit.sourceforge.net/ 00:04:12.268 00:04:12.268 00:04:12.268 Suite: components_suite 00:04:12.835 Test: vtophys_malloc_test ...passed 00:04:12.835 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
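The four 0x400000000-byte VA reservations above are exactly sized: each memseg list holds n_segs:8192 hugepages of hugepage_sz:2097152 bytes, and 8192 x 2 MiB is 16 GiB (0x400000000), so the four lists together pin 64 GiB of virtual address space. A quick shell check of that arithmetic:

# Sanity-check the EAL memseg sizing printed above.
printf '0x%x\n' $((8192 * 2097152))                 # 0x400000000 per memseg list
echo "$((4 * 8192 * 2097152 / 1024**3)) GiB total"  # 64 GiB across 4 lists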
00:04:12.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.835 EAL: Restoring previous memory policy: 4 00:04:12.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.835 EAL: request: mp_malloc_sync 00:04:12.835 EAL: No shared files mode enabled, IPC is disabled 00:04:12.835 EAL: Heap on socket 0 was expanded by 4MB 00:04:12.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.835 EAL: request: mp_malloc_sync 00:04:12.835 EAL: No shared files mode enabled, IPC is disabled 00:04:12.835 EAL: Heap on socket 0 was shrunk by 4MB 00:04:12.835 EAL: Trying to obtain current memory policy. 00:04:12.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.835 EAL: Restoring previous memory policy: 4 00:04:12.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.835 EAL: request: mp_malloc_sync 00:04:12.835 EAL: No shared files mode enabled, IPC is disabled 00:04:12.835 EAL: Heap on socket 0 was expanded by 6MB 00:04:12.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.835 EAL: request: mp_malloc_sync 00:04:12.835 EAL: No shared files mode enabled, IPC is disabled 00:04:12.835 EAL: Heap on socket 0 was shrunk by 6MB 00:04:12.835 EAL: Trying to obtain current memory policy. 00:04:12.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.835 EAL: Restoring previous memory policy: 4 00:04:12.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.835 EAL: request: mp_malloc_sync 00:04:12.835 EAL: No shared files mode enabled, IPC is disabled 00:04:12.835 EAL: Heap on socket 0 was expanded by 10MB 00:04:12.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.835 EAL: request: mp_malloc_sync 00:04:12.835 EAL: No shared files mode enabled, IPC is disabled 00:04:12.835 EAL: Heap on socket 0 was shrunk by 10MB 00:04:12.835 EAL: Trying to obtain current memory policy. 00:04:12.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.835 EAL: Restoring previous memory policy: 4 00:04:12.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.835 EAL: request: mp_malloc_sync 00:04:12.835 EAL: No shared files mode enabled, IPC is disabled 00:04:12.835 EAL: Heap on socket 0 was expanded by 18MB 00:04:12.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.835 EAL: request: mp_malloc_sync 00:04:12.835 EAL: No shared files mode enabled, IPC is disabled 00:04:12.835 EAL: Heap on socket 0 was shrunk by 18MB 00:04:12.835 EAL: Trying to obtain current memory policy. 00:04:12.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.835 EAL: Restoring previous memory policy: 4 00:04:12.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.835 EAL: request: mp_malloc_sync 00:04:12.835 EAL: No shared files mode enabled, IPC is disabled 00:04:12.835 EAL: Heap on socket 0 was expanded by 34MB 00:04:12.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.835 EAL: request: mp_malloc_sync 00:04:12.835 EAL: No shared files mode enabled, IPC is disabled 00:04:12.835 EAL: Heap on socket 0 was shrunk by 34MB 00:04:12.835 EAL: Trying to obtain current memory policy. 
00:04:12.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.835 EAL: Restoring previous memory policy: 4 00:04:12.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.835 EAL: request: mp_malloc_sync 00:04:12.836 EAL: No shared files mode enabled, IPC is disabled 00:04:12.836 EAL: Heap on socket 0 was expanded by 66MB 00:04:12.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.836 EAL: request: mp_malloc_sync 00:04:12.836 EAL: No shared files mode enabled, IPC is disabled 00:04:12.836 EAL: Heap on socket 0 was shrunk by 66MB 00:04:13.094 EAL: Trying to obtain current memory policy. 00:04:13.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.094 EAL: Restoring previous memory policy: 4 00:04:13.094 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.094 EAL: request: mp_malloc_sync 00:04:13.094 EAL: No shared files mode enabled, IPC is disabled 00:04:13.094 EAL: Heap on socket 0 was expanded by 130MB 00:04:13.094 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.094 EAL: request: mp_malloc_sync 00:04:13.094 EAL: No shared files mode enabled, IPC is disabled 00:04:13.094 EAL: Heap on socket 0 was shrunk by 130MB 00:04:13.352 EAL: Trying to obtain current memory policy. 00:04:13.352 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.352 EAL: Restoring previous memory policy: 4 00:04:13.352 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.352 EAL: request: mp_malloc_sync 00:04:13.352 EAL: No shared files mode enabled, IPC is disabled 00:04:13.352 EAL: Heap on socket 0 was expanded by 258MB 00:04:13.610 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.610 EAL: request: mp_malloc_sync 00:04:13.610 EAL: No shared files mode enabled, IPC is disabled 00:04:13.610 EAL: Heap on socket 0 was shrunk by 258MB 00:04:13.868 EAL: Trying to obtain current memory policy. 00:04:13.868 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.868 EAL: Restoring previous memory policy: 4 00:04:13.868 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.868 EAL: request: mp_malloc_sync 00:04:13.868 EAL: No shared files mode enabled, IPC is disabled 00:04:13.868 EAL: Heap on socket 0 was expanded by 514MB 00:04:14.435 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.693 EAL: request: mp_malloc_sync 00:04:14.693 EAL: No shared files mode enabled, IPC is disabled 00:04:14.693 EAL: Heap on socket 0 was shrunk by 514MB 00:04:15.259 EAL: Trying to obtain current memory policy. 
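The heap-expansion sizes in this suite (4, 6, 10, 18, 34, 66, 130, 258, 514 MB so far) follow the series 2^k + 2 MB; my reading is that each step mallocs a doubling power-of-two buffer on top of the initial 2 MB expansion seen at EAL startup. The final 1026 MB step below continues the same ladder:

# Reproduce the expansion ladder observed in vtophys_spdk_malloc_test.
for k in $(seq 1 10); do
    printf '%dMB ' $((2**k + 2))
done
echo  # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB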
00:04:15.259 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.259 EAL: Restoring previous memory policy: 4 00:04:15.259 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.259 EAL: request: mp_malloc_sync 00:04:15.259 EAL: No shared files mode enabled, IPC is disabled 00:04:15.259 EAL: Heap on socket 0 was expanded by 1026MB 00:04:16.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.633 EAL: request: mp_malloc_sync 00:04:16.633 EAL: No shared files mode enabled, IPC is disabled 00:04:16.633 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:17.261 passed 00:04:17.261 00:04:17.261 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.261 suites 1 1 n/a 0 0 00:04:17.261 tests 2 2 2 0 0 00:04:17.261 asserts 5348 5348 5348 0 n/a 00:04:17.261 00:04:17.261 Elapsed time = 4.860 seconds 00:04:17.261 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.261 EAL: request: mp_malloc_sync 00:04:17.261 EAL: No shared files mode enabled, IPC is disabled 00:04:17.261 EAL: Heap on socket 0 was shrunk by 2MB 00:04:17.261 EAL: No shared files mode enabled, IPC is disabled 00:04:17.261 EAL: No shared files mode enabled, IPC is disabled 00:04:17.261 EAL: No shared files mode enabled, IPC is disabled 00:04:17.261 00:04:17.261 real 0m5.125s 00:04:17.261 user 0m4.324s 00:04:17.261 sys 0m0.643s 00:04:17.261 14:05:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:17.261 14:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:17.261 ************************************ 00:04:17.261 END TEST env_vtophys 00:04:17.261 ************************************ 00:04:17.261 14:05:15 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:17.261 14:05:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.261 14:05:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.261 14:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:17.519 ************************************ 00:04:17.519 START TEST env_pci 00:04:17.519 ************************************ 00:04:17.519 14:05:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:17.519 00:04:17.519 00:04:17.519 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.519 http://cunit.sourceforge.net/ 00:04:17.519 00:04:17.519 00:04:17.519 Suite: pci 00:04:17.519 Test: pci_hook ...[2024-11-19 14:05:15.851818] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56060 has claimed it 00:04:17.519 passed 00:04:17.519 00:04:17.519 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.519 suites 1 1 n/a 0 0 00:04:17.519 tests 1 1 1 0 0 00:04:17.519 asserts 25 25 25 0 n/a 00:04:17.519 00:04:17.519 Elapsed time = 0.007 seconds 00:04:17.519 EAL: Cannot find device (10000:00:01.0) 00:04:17.519 EAL: Failed to attach device on primary process 00:04:17.519 00:04:17.519 real 0m0.066s 00:04:17.519 user 0m0.034s 00:04:17.519 sys 0m0.031s 00:04:17.519 14:05:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:17.519 14:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:17.519 ************************************ 00:04:17.519 END TEST env_pci 00:04:17.519 ************************************ 00:04:17.519 14:05:15 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:17.519 14:05:15 -- env/env.sh@15 -- # uname 00:04:17.519 14:05:15 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:17.519 14:05:15 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:04:17.519 14:05:15 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:17.519 14:05:15 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:17.519 14:05:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.519 14:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:17.519 ************************************ 00:04:17.519 START TEST env_dpdk_post_init 00:04:17.519 ************************************ 00:04:17.519 14:05:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:17.519 EAL: Detected CPU lcores: 10 00:04:17.519 EAL: Detected NUMA nodes: 1 00:04:17.519 EAL: Detected shared linkage of DPDK 00:04:17.519 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:17.519 EAL: Selected IOVA mode 'PA' 00:04:17.777 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:17.778 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:17.778 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:17.778 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:04:17.778 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:04:17.778 Starting DPDK initialization... 00:04:17.778 Starting SPDK post initialization... 00:04:17.778 SPDK NVMe probe 00:04:17.778 Attaching to 0000:00:06.0 00:04:17.778 Attaching to 0000:00:07.0 00:04:17.778 Attaching to 0000:00:08.0 00:04:17.778 Attaching to 0000:00:09.0 00:04:17.778 Attached to 0000:00:06.0 00:04:17.778 Attached to 0000:00:07.0 00:04:17.778 Attached to 0000:00:09.0 00:04:17.778 Attached to 0000:00:08.0 00:04:17.778 Cleaning up... 00:04:17.778 00:04:17.778 real 0m0.216s 00:04:17.778 user 0m0.061s 00:04:17.778 sys 0m0.057s 00:04:17.778 14:05:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:17.778 14:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:17.778 ************************************ 00:04:17.778 END TEST env_dpdk_post_init 00:04:17.778 ************************************ 00:04:17.778 14:05:16 -- env/env.sh@26 -- # uname 00:04:17.778 14:05:16 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:17.778 14:05:16 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:17.778 14:05:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.778 14:05:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.778 14:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:17.778 ************************************ 00:04:17.778 START TEST env_mem_callbacks 00:04:17.778 ************************************ 00:04:17.778 14:05:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:17.778 EAL: Detected CPU lcores: 10 00:04:17.778 EAL: Detected NUMA nodes: 1 00:04:17.778 EAL: Detected shared linkage of DPDK 00:04:17.778 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:17.778 EAL: Selected IOVA mode 'PA' 00:04:18.035 00:04:18.035 00:04:18.035 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.035 http://cunit.sourceforge.net/ 00:04:18.035 00:04:18.035 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:18.035 00:04:18.035 Suite: memory 00:04:18.035 Test: test ... 
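In the callback trace that follows, the registration sizes track DPDK's 2 MiB hugepage granularity rather than the malloc sizes themselves: the 3145728-byte (3 MB) malloc triggers a 4194304-byte (two-page) registration, and the 4194304-byte malloc lands in a 6291456-byte (three-page) region, presumably once allocator metadata is included (that last part is my reading, not something the log states). The rounding itself is just:

# Round a size up to 2 MiB hugepage granularity, matching the
# register/unregister sizes in the trace below.
hugepage_roundup() { echo $(( ($1 + 2097151) / 2097152 * 2097152 )); }
hugepage_roundup 3145728   # -> 4194304
hugepage_roundup 4194304   # -> 4194304 (allocator metadata pushes the real region to 6291456)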
00:04:18.035 register 0x200000200000 2097152 00:04:18.035 malloc 3145728 00:04:18.035 register 0x200000400000 4194304 00:04:18.035 buf 0x2000004fffc0 len 3145728 PASSED 00:04:18.035 malloc 64 00:04:18.035 buf 0x2000004ffec0 len 64 PASSED 00:04:18.035 malloc 4194304 00:04:18.035 register 0x200000800000 6291456 00:04:18.035 buf 0x2000009fffc0 len 4194304 PASSED 00:04:18.035 free 0x2000004fffc0 3145728 00:04:18.035 free 0x2000004ffec0 64 00:04:18.035 unregister 0x200000400000 4194304 PASSED 00:04:18.035 free 0x2000009fffc0 4194304 00:04:18.035 unregister 0x200000800000 6291456 PASSED 00:04:18.035 malloc 8388608 00:04:18.035 register 0x200000400000 10485760 00:04:18.035 buf 0x2000005fffc0 len 8388608 PASSED 00:04:18.035 free 0x2000005fffc0 8388608 00:04:18.035 unregister 0x200000400000 10485760 PASSED 00:04:18.035 passed 00:04:18.035 00:04:18.035 Run Summary: Type Total Ran Passed Failed Inactive 00:04:18.035 suites 1 1 n/a 0 0 00:04:18.035 tests 1 1 1 0 0 00:04:18.035 asserts 15 15 15 0 n/a 00:04:18.035 00:04:18.035 Elapsed time = 0.039 seconds 00:04:18.035 00:04:18.035 real 0m0.202s 00:04:18.035 user 0m0.061s 00:04:18.035 sys 0m0.040s 00:04:18.035 14:05:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:18.035 14:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:18.035 ************************************ 00:04:18.036 END TEST env_mem_callbacks 00:04:18.036 ************************************ 00:04:18.036 00:04:18.036 real 0m6.259s 00:04:18.036 user 0m4.868s 00:04:18.036 sys 0m0.985s 00:04:18.036 14:05:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:18.036 14:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:18.036 ************************************ 00:04:18.036 END TEST env 00:04:18.036 ************************************ 00:04:18.036 14:05:16 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:18.036 14:05:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:18.036 14:05:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:18.036 14:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:18.036 ************************************ 00:04:18.036 START TEST rpc 00:04:18.036 ************************************ 00:04:18.036 14:05:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:18.036 * Looking for test storage... 
00:04:18.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:18.036 14:05:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:18.036 14:05:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:18.036 14:05:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:18.036 14:05:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:18.036 14:05:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:18.036 14:05:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:18.036 14:05:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:18.036 14:05:16 -- scripts/common.sh@335 -- # IFS=.-: 00:04:18.036 14:05:16 -- scripts/common.sh@335 -- # read -ra ver1 00:04:18.036 14:05:16 -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.036 14:05:16 -- scripts/common.sh@336 -- # read -ra ver2 00:04:18.036 14:05:16 -- scripts/common.sh@337 -- # local 'op=<' 00:04:18.036 14:05:16 -- scripts/common.sh@339 -- # ver1_l=2 00:04:18.036 14:05:16 -- scripts/common.sh@340 -- # ver2_l=1 00:04:18.036 14:05:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:18.036 14:05:16 -- scripts/common.sh@343 -- # case "$op" in 00:04:18.036 14:05:16 -- scripts/common.sh@344 -- # : 1 00:04:18.036 14:05:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:18.036 14:05:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:18.036 14:05:16 -- scripts/common.sh@364 -- # decimal 1 00:04:18.036 14:05:16 -- scripts/common.sh@352 -- # local d=1 00:04:18.036 14:05:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.036 14:05:16 -- scripts/common.sh@354 -- # echo 1 00:04:18.036 14:05:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:18.036 14:05:16 -- scripts/common.sh@365 -- # decimal 2 00:04:18.036 14:05:16 -- scripts/common.sh@352 -- # local d=2 00:04:18.036 14:05:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.294 14:05:16 -- scripts/common.sh@354 -- # echo 2 00:04:18.294 14:05:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:18.294 14:05:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:18.294 14:05:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:18.294 14:05:16 -- scripts/common.sh@367 -- # return 0 00:04:18.294 14:05:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.294 14:05:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:18.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.294 --rc genhtml_branch_coverage=1 00:04:18.294 --rc genhtml_function_coverage=1 00:04:18.294 --rc genhtml_legend=1 00:04:18.294 --rc geninfo_all_blocks=1 00:04:18.294 --rc geninfo_unexecuted_blocks=1 00:04:18.294 00:04:18.294 ' 00:04:18.294 14:05:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:18.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.294 --rc genhtml_branch_coverage=1 00:04:18.294 --rc genhtml_function_coverage=1 00:04:18.294 --rc genhtml_legend=1 00:04:18.294 --rc geninfo_all_blocks=1 00:04:18.294 --rc geninfo_unexecuted_blocks=1 00:04:18.294 00:04:18.294 ' 00:04:18.294 14:05:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:18.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.294 --rc genhtml_branch_coverage=1 00:04:18.294 --rc genhtml_function_coverage=1 00:04:18.294 --rc genhtml_legend=1 00:04:18.294 --rc geninfo_all_blocks=1 00:04:18.294 --rc geninfo_unexecuted_blocks=1 00:04:18.294 00:04:18.294 ' 00:04:18.294 14:05:16 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:18.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.294 --rc genhtml_branch_coverage=1 00:04:18.294 --rc genhtml_function_coverage=1 00:04:18.294 --rc genhtml_legend=1 00:04:18.294 --rc geninfo_all_blocks=1 00:04:18.294 --rc geninfo_unexecuted_blocks=1 00:04:18.294 00:04:18.294 ' 00:04:18.294 14:05:16 -- rpc/rpc.sh@65 -- # spdk_pid=56186 00:04:18.294 14:05:16 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.294 14:05:16 -- rpc/rpc.sh@67 -- # waitforlisten 56186 00:04:18.294 14:05:16 -- common/autotest_common.sh@829 -- # '[' -z 56186 ']' 00:04:18.294 14:05:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.294 14:05:16 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:18.294 14:05:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:18.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.294 14:05:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.294 14:05:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:18.294 14:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:18.294 [2024-11-19 14:05:16.669135] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:18.294 [2024-11-19 14:05:16.669244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56186 ] 00:04:18.294 [2024-11-19 14:05:16.814853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.553 [2024-11-19 14:05:16.954670] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:18.553 [2024-11-19 14:05:16.954819] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:18.553 [2024-11-19 14:05:16.954831] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56186' to capture a snapshot of events at runtime. 00:04:18.553 [2024-11-19 14:05:16.954838] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56186 for offline analysis/debug. 
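With spdk_tgt up and listening on the default socket, the rpc_integrity test below drives it through rpc_cmd, a thin wrapper around scripts/rpc.py. The same create/inspect/delete flow can be reproduced by hand (socket path assumed to be the /var/tmp/spdk.sock the harness waits on):

# Hand-driven equivalent of the rpc_integrity flow traced below.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

"$rpc" -s "$sock" bdev_get_bdevs | jq length          # expect 0
malloc=$("$rpc" -s "$sock" bdev_malloc_create 8 512)  # 8 MB bdev, 512 B blocks; prints its name
"$rpc" -s "$sock" bdev_passthru_create -b "$malloc" -p Passthru0
"$rpc" -s "$sock" bdev_get_bdevs | jq length          # expect 2 (malloc + passthru)
"$rpc" -s "$sock" bdev_passthru_delete Passthru0
"$rpc" -s "$sock" bdev_malloc_delete "$malloc"
"$rpc" -s "$sock" bdev_get_bdevs | jq length          # back to 0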
00:04:18.553 [2024-11-19 14:05:16.954864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.119 14:05:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:19.119 14:05:17 -- common/autotest_common.sh@862 -- # return 0 00:04:19.119 14:05:17 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:19.119 14:05:17 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:19.119 14:05:17 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:19.119 14:05:17 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:19.119 14:05:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.119 14:05:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.119 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:19.119 ************************************ 00:04:19.119 START TEST rpc_integrity 00:04:19.119 ************************************ 00:04:19.119 14:05:17 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:19.119 14:05:17 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:19.119 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.119 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:19.119 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.119 14:05:17 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:19.119 14:05:17 -- rpc/rpc.sh@13 -- # jq length 00:04:19.119 14:05:17 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:19.119 14:05:17 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:19.119 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.119 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:19.119 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.119 14:05:17 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:19.119 14:05:17 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:19.119 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.119 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:19.119 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.119 14:05:17 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:19.119 { 00:04:19.119 "name": "Malloc0", 00:04:19.119 "aliases": [ 00:04:19.119 "088f620d-0b8c-465a-be4a-03106c70ad2e" 00:04:19.119 ], 00:04:19.119 "product_name": "Malloc disk", 00:04:19.119 "block_size": 512, 00:04:19.119 "num_blocks": 16384, 00:04:19.119 "uuid": "088f620d-0b8c-465a-be4a-03106c70ad2e", 00:04:19.119 "assigned_rate_limits": { 00:04:19.119 "rw_ios_per_sec": 0, 00:04:19.119 "rw_mbytes_per_sec": 0, 00:04:19.119 "r_mbytes_per_sec": 0, 00:04:19.119 "w_mbytes_per_sec": 0 00:04:19.119 }, 00:04:19.119 "claimed": false, 00:04:19.119 "zoned": false, 00:04:19.119 "supported_io_types": { 00:04:19.119 "read": true, 00:04:19.119 "write": true, 00:04:19.119 "unmap": true, 00:04:19.119 "write_zeroes": true, 00:04:19.119 "flush": true, 00:04:19.119 "reset": true, 00:04:19.119 "compare": false, 00:04:19.119 "compare_and_write": false, 00:04:19.119 "abort": true, 00:04:19.119 "nvme_admin": false, 00:04:19.119 "nvme_io": false 00:04:19.119 }, 00:04:19.119 "memory_domains": [ 00:04:19.119 { 00:04:19.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.119 
"dma_device_type": 2 00:04:19.119 } 00:04:19.119 ], 00:04:19.119 "driver_specific": {} 00:04:19.119 } 00:04:19.119 ]' 00:04:19.119 14:05:17 -- rpc/rpc.sh@17 -- # jq length 00:04:19.119 14:05:17 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:19.119 14:05:17 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:19.120 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.120 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:19.120 [2024-11-19 14:05:17.588037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:19.120 [2024-11-19 14:05:17.588090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:19.120 [2024-11-19 14:05:17.588107] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:04:19.120 [2024-11-19 14:05:17.588116] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:19.120 [2024-11-19 14:05:17.589753] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:19.120 [2024-11-19 14:05:17.589788] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:19.120 Passthru0 00:04:19.120 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.120 14:05:17 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:19.120 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.120 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:19.120 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.120 14:05:17 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:19.120 { 00:04:19.120 "name": "Malloc0", 00:04:19.120 "aliases": [ 00:04:19.120 "088f620d-0b8c-465a-be4a-03106c70ad2e" 00:04:19.120 ], 00:04:19.120 "product_name": "Malloc disk", 00:04:19.120 "block_size": 512, 00:04:19.120 "num_blocks": 16384, 00:04:19.120 "uuid": "088f620d-0b8c-465a-be4a-03106c70ad2e", 00:04:19.120 "assigned_rate_limits": { 00:04:19.120 "rw_ios_per_sec": 0, 00:04:19.120 "rw_mbytes_per_sec": 0, 00:04:19.120 "r_mbytes_per_sec": 0, 00:04:19.120 "w_mbytes_per_sec": 0 00:04:19.120 }, 00:04:19.120 "claimed": true, 00:04:19.120 "claim_type": "exclusive_write", 00:04:19.120 "zoned": false, 00:04:19.120 "supported_io_types": { 00:04:19.120 "read": true, 00:04:19.120 "write": true, 00:04:19.120 "unmap": true, 00:04:19.120 "write_zeroes": true, 00:04:19.120 "flush": true, 00:04:19.120 "reset": true, 00:04:19.120 "compare": false, 00:04:19.120 "compare_and_write": false, 00:04:19.120 "abort": true, 00:04:19.120 "nvme_admin": false, 00:04:19.120 "nvme_io": false 00:04:19.120 }, 00:04:19.120 "memory_domains": [ 00:04:19.120 { 00:04:19.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.120 "dma_device_type": 2 00:04:19.120 } 00:04:19.120 ], 00:04:19.120 "driver_specific": {} 00:04:19.120 }, 00:04:19.120 { 00:04:19.120 "name": "Passthru0", 00:04:19.120 "aliases": [ 00:04:19.120 "5dc3746b-c4fe-5678-8820-ffb0dd36c0f0" 00:04:19.120 ], 00:04:19.120 "product_name": "passthru", 00:04:19.120 "block_size": 512, 00:04:19.120 "num_blocks": 16384, 00:04:19.120 "uuid": "5dc3746b-c4fe-5678-8820-ffb0dd36c0f0", 00:04:19.120 "assigned_rate_limits": { 00:04:19.120 "rw_ios_per_sec": 0, 00:04:19.120 "rw_mbytes_per_sec": 0, 00:04:19.120 "r_mbytes_per_sec": 0, 00:04:19.120 "w_mbytes_per_sec": 0 00:04:19.120 }, 00:04:19.120 "claimed": false, 00:04:19.120 "zoned": false, 00:04:19.120 "supported_io_types": { 00:04:19.120 "read": true, 00:04:19.120 "write": true, 00:04:19.120 "unmap": true, 00:04:19.120 
"write_zeroes": true, 00:04:19.120 "flush": true, 00:04:19.120 "reset": true, 00:04:19.120 "compare": false, 00:04:19.120 "compare_and_write": false, 00:04:19.120 "abort": true, 00:04:19.120 "nvme_admin": false, 00:04:19.120 "nvme_io": false 00:04:19.120 }, 00:04:19.120 "memory_domains": [ 00:04:19.120 { 00:04:19.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.120 "dma_device_type": 2 00:04:19.120 } 00:04:19.120 ], 00:04:19.120 "driver_specific": { 00:04:19.120 "passthru": { 00:04:19.120 "name": "Passthru0", 00:04:19.120 "base_bdev_name": "Malloc0" 00:04:19.120 } 00:04:19.120 } 00:04:19.120 } 00:04:19.120 ]' 00:04:19.120 14:05:17 -- rpc/rpc.sh@21 -- # jq length 00:04:19.120 14:05:17 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:19.120 14:05:17 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:19.120 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.120 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:19.120 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.120 14:05:17 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:19.120 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.120 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:19.120 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.120 14:05:17 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:19.120 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.120 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:19.120 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.120 14:05:17 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:19.378 14:05:17 -- rpc/rpc.sh@26 -- # jq length 00:04:19.378 14:05:17 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:19.378 00:04:19.378 real 0m0.234s 00:04:19.378 user 0m0.124s 00:04:19.378 sys 0m0.036s 00:04:19.378 14:05:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:19.378 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:19.378 ************************************ 00:04:19.378 END TEST rpc_integrity 00:04:19.378 ************************************ 00:04:19.378 14:05:17 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:19.378 14:05:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.378 14:05:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.378 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:19.378 ************************************ 00:04:19.378 START TEST rpc_plugins 00:04:19.378 ************************************ 00:04:19.378 14:05:17 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:19.378 14:05:17 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:19.378 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.378 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:19.378 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.378 14:05:17 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:19.378 14:05:17 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:19.378 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.378 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:19.378 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.378 14:05:17 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:19.378 { 00:04:19.378 "name": "Malloc1", 00:04:19.378 "aliases": [ 00:04:19.378 "f0c8454b-44a3-4e5b-bc24-abe6d432339e" 00:04:19.378 ], 00:04:19.378 "product_name": "Malloc disk", 00:04:19.378 
"block_size": 4096, 00:04:19.378 "num_blocks": 256, 00:04:19.378 "uuid": "f0c8454b-44a3-4e5b-bc24-abe6d432339e", 00:04:19.378 "assigned_rate_limits": { 00:04:19.378 "rw_ios_per_sec": 0, 00:04:19.378 "rw_mbytes_per_sec": 0, 00:04:19.378 "r_mbytes_per_sec": 0, 00:04:19.378 "w_mbytes_per_sec": 0 00:04:19.378 }, 00:04:19.378 "claimed": false, 00:04:19.378 "zoned": false, 00:04:19.378 "supported_io_types": { 00:04:19.379 "read": true, 00:04:19.379 "write": true, 00:04:19.379 "unmap": true, 00:04:19.379 "write_zeroes": true, 00:04:19.379 "flush": true, 00:04:19.379 "reset": true, 00:04:19.379 "compare": false, 00:04:19.379 "compare_and_write": false, 00:04:19.379 "abort": true, 00:04:19.379 "nvme_admin": false, 00:04:19.379 "nvme_io": false 00:04:19.379 }, 00:04:19.379 "memory_domains": [ 00:04:19.379 { 00:04:19.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.379 "dma_device_type": 2 00:04:19.379 } 00:04:19.379 ], 00:04:19.379 "driver_specific": {} 00:04:19.379 } 00:04:19.379 ]' 00:04:19.379 14:05:17 -- rpc/rpc.sh@32 -- # jq length 00:04:19.379 14:05:17 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:19.379 14:05:17 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:19.379 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.379 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:19.379 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.379 14:05:17 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:19.379 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.379 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:19.379 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.379 14:05:17 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:19.379 14:05:17 -- rpc/rpc.sh@36 -- # jq length 00:04:19.379 14:05:17 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:19.379 00:04:19.379 real 0m0.109s 00:04:19.379 user 0m0.062s 00:04:19.379 sys 0m0.019s 00:04:19.379 14:05:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:19.379 ************************************ 00:04:19.379 END TEST rpc_plugins 00:04:19.379 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:19.379 ************************************ 00:04:19.379 14:05:17 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:19.379 14:05:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.379 14:05:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.379 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:19.379 ************************************ 00:04:19.379 START TEST rpc_trace_cmd_test 00:04:19.379 ************************************ 00:04:19.379 14:05:17 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:19.379 14:05:17 -- rpc/rpc.sh@40 -- # local info 00:04:19.379 14:05:17 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:19.379 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.379 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:19.379 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.379 14:05:17 -- rpc/rpc.sh@42 -- # info='{ 00:04:19.379 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56186", 00:04:19.379 "tpoint_group_mask": "0x8", 00:04:19.379 "iscsi_conn": { 00:04:19.379 "mask": "0x2", 00:04:19.379 "tpoint_mask": "0x0" 00:04:19.379 }, 00:04:19.379 "scsi": { 00:04:19.379 "mask": "0x4", 00:04:19.379 "tpoint_mask": "0x0" 00:04:19.379 }, 00:04:19.379 "bdev": { 00:04:19.379 "mask": "0x8", 00:04:19.379 "tpoint_mask": 
"0xffffffffffffffff" 00:04:19.379 }, 00:04:19.379 "nvmf_rdma": { 00:04:19.379 "mask": "0x10", 00:04:19.379 "tpoint_mask": "0x0" 00:04:19.379 }, 00:04:19.379 "nvmf_tcp": { 00:04:19.379 "mask": "0x20", 00:04:19.379 "tpoint_mask": "0x0" 00:04:19.379 }, 00:04:19.379 "ftl": { 00:04:19.379 "mask": "0x40", 00:04:19.379 "tpoint_mask": "0x0" 00:04:19.379 }, 00:04:19.379 "blobfs": { 00:04:19.379 "mask": "0x80", 00:04:19.379 "tpoint_mask": "0x0" 00:04:19.379 }, 00:04:19.379 "dsa": { 00:04:19.379 "mask": "0x200", 00:04:19.379 "tpoint_mask": "0x0" 00:04:19.379 }, 00:04:19.379 "thread": { 00:04:19.379 "mask": "0x400", 00:04:19.379 "tpoint_mask": "0x0" 00:04:19.379 }, 00:04:19.379 "nvme_pcie": { 00:04:19.379 "mask": "0x800", 00:04:19.379 "tpoint_mask": "0x0" 00:04:19.379 }, 00:04:19.379 "iaa": { 00:04:19.379 "mask": "0x1000", 00:04:19.379 "tpoint_mask": "0x0" 00:04:19.379 }, 00:04:19.379 "nvme_tcp": { 00:04:19.379 "mask": "0x2000", 00:04:19.379 "tpoint_mask": "0x0" 00:04:19.379 }, 00:04:19.379 "bdev_nvme": { 00:04:19.379 "mask": "0x4000", 00:04:19.379 "tpoint_mask": "0x0" 00:04:19.379 } 00:04:19.379 }' 00:04:19.379 14:05:17 -- rpc/rpc.sh@43 -- # jq length 00:04:19.637 14:05:17 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:19.637 14:05:17 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:19.637 14:05:17 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:19.637 14:05:17 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:19.637 14:05:18 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:19.637 14:05:18 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:19.637 14:05:18 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:19.637 14:05:18 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:19.637 14:05:18 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:19.637 00:04:19.637 real 0m0.172s 00:04:19.637 user 0m0.145s 00:04:19.637 sys 0m0.020s 00:04:19.637 14:05:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:19.637 14:05:18 -- common/autotest_common.sh@10 -- # set +x 00:04:19.637 ************************************ 00:04:19.637 END TEST rpc_trace_cmd_test 00:04:19.637 ************************************ 00:04:19.637 14:05:18 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:19.637 14:05:18 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:19.637 14:05:18 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:19.637 14:05:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.637 14:05:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.637 14:05:18 -- common/autotest_common.sh@10 -- # set +x 00:04:19.637 ************************************ 00:04:19.637 START TEST rpc_daemon_integrity 00:04:19.637 ************************************ 00:04:19.637 14:05:18 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:19.637 14:05:18 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:19.637 14:05:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.637 14:05:18 -- common/autotest_common.sh@10 -- # set +x 00:04:19.637 14:05:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.637 14:05:18 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:19.637 14:05:18 -- rpc/rpc.sh@13 -- # jq length 00:04:19.637 14:05:18 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:19.637 14:05:18 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:19.637 14:05:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.637 14:05:18 -- common/autotest_common.sh@10 -- # set +x 00:04:19.637 14:05:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.637 14:05:18 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:19.637 14:05:18 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:19.637 14:05:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.637 14:05:18 -- common/autotest_common.sh@10 -- # set +x 00:04:19.637 14:05:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.637 14:05:18 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:19.637 { 00:04:19.637 "name": "Malloc2", 00:04:19.637 "aliases": [ 00:04:19.637 "e6083ef0-f19f-409d-b3e0-8749e71113bb" 00:04:19.637 ], 00:04:19.637 "product_name": "Malloc disk", 00:04:19.637 "block_size": 512, 00:04:19.637 "num_blocks": 16384, 00:04:19.637 "uuid": "e6083ef0-f19f-409d-b3e0-8749e71113bb", 00:04:19.637 "assigned_rate_limits": { 00:04:19.637 "rw_ios_per_sec": 0, 00:04:19.637 "rw_mbytes_per_sec": 0, 00:04:19.637 "r_mbytes_per_sec": 0, 00:04:19.637 "w_mbytes_per_sec": 0 00:04:19.637 }, 00:04:19.637 "claimed": false, 00:04:19.637 "zoned": false, 00:04:19.637 "supported_io_types": { 00:04:19.637 "read": true, 00:04:19.637 "write": true, 00:04:19.637 "unmap": true, 00:04:19.637 "write_zeroes": true, 00:04:19.637 "flush": true, 00:04:19.637 "reset": true, 00:04:19.637 "compare": false, 00:04:19.637 "compare_and_write": false, 00:04:19.638 "abort": true, 00:04:19.638 "nvme_admin": false, 00:04:19.638 "nvme_io": false 00:04:19.638 }, 00:04:19.638 "memory_domains": [ 00:04:19.638 { 00:04:19.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.638 "dma_device_type": 2 00:04:19.638 } 00:04:19.638 ], 00:04:19.638 "driver_specific": {} 00:04:19.638 } 00:04:19.638 ]' 00:04:19.638 14:05:18 -- rpc/rpc.sh@17 -- # jq length 00:04:19.896 14:05:18 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:19.896 14:05:18 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:19.896 14:05:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.896 14:05:18 -- common/autotest_common.sh@10 -- # set +x 00:04:19.896 [2024-11-19 14:05:18.212193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:19.896 [2024-11-19 14:05:18.212242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:19.896 [2024-11-19 14:05:18.212256] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:04:19.896 [2024-11-19 14:05:18.212263] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:19.896 [2024-11-19 14:05:18.213904] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:19.896 [2024-11-19 14:05:18.213936] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:19.896 Passthru0 00:04:19.896 14:05:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.896 14:05:18 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:19.896 14:05:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.896 14:05:18 -- common/autotest_common.sh@10 -- # set +x 00:04:19.896 14:05:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.896 14:05:18 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:19.896 { 00:04:19.896 "name": "Malloc2", 00:04:19.896 "aliases": [ 00:04:19.896 "e6083ef0-f19f-409d-b3e0-8749e71113bb" 00:04:19.896 ], 00:04:19.896 "product_name": "Malloc disk", 00:04:19.896 "block_size": 512, 00:04:19.896 "num_blocks": 16384, 00:04:19.896 "uuid": "e6083ef0-f19f-409d-b3e0-8749e71113bb", 00:04:19.896 "assigned_rate_limits": { 00:04:19.896 "rw_ios_per_sec": 0, 00:04:19.896 "rw_mbytes_per_sec": 0, 00:04:19.896 "r_mbytes_per_sec": 0, 00:04:19.896 
"w_mbytes_per_sec": 0 00:04:19.896 }, 00:04:19.896 "claimed": true, 00:04:19.896 "claim_type": "exclusive_write", 00:04:19.896 "zoned": false, 00:04:19.896 "supported_io_types": { 00:04:19.896 "read": true, 00:04:19.896 "write": true, 00:04:19.896 "unmap": true, 00:04:19.896 "write_zeroes": true, 00:04:19.896 "flush": true, 00:04:19.896 "reset": true, 00:04:19.896 "compare": false, 00:04:19.896 "compare_and_write": false, 00:04:19.896 "abort": true, 00:04:19.896 "nvme_admin": false, 00:04:19.896 "nvme_io": false 00:04:19.896 }, 00:04:19.896 "memory_domains": [ 00:04:19.896 { 00:04:19.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.896 "dma_device_type": 2 00:04:19.896 } 00:04:19.896 ], 00:04:19.896 "driver_specific": {} 00:04:19.896 }, 00:04:19.896 { 00:04:19.896 "name": "Passthru0", 00:04:19.896 "aliases": [ 00:04:19.896 "15807c45-3706-5902-b904-ad1f00498561" 00:04:19.896 ], 00:04:19.896 "product_name": "passthru", 00:04:19.896 "block_size": 512, 00:04:19.896 "num_blocks": 16384, 00:04:19.896 "uuid": "15807c45-3706-5902-b904-ad1f00498561", 00:04:19.896 "assigned_rate_limits": { 00:04:19.896 "rw_ios_per_sec": 0, 00:04:19.896 "rw_mbytes_per_sec": 0, 00:04:19.896 "r_mbytes_per_sec": 0, 00:04:19.896 "w_mbytes_per_sec": 0 00:04:19.896 }, 00:04:19.896 "claimed": false, 00:04:19.896 "zoned": false, 00:04:19.896 "supported_io_types": { 00:04:19.896 "read": true, 00:04:19.896 "write": true, 00:04:19.896 "unmap": true, 00:04:19.896 "write_zeroes": true, 00:04:19.896 "flush": true, 00:04:19.896 "reset": true, 00:04:19.896 "compare": false, 00:04:19.896 "compare_and_write": false, 00:04:19.896 "abort": true, 00:04:19.896 "nvme_admin": false, 00:04:19.896 "nvme_io": false 00:04:19.896 }, 00:04:19.896 "memory_domains": [ 00:04:19.896 { 00:04:19.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.896 "dma_device_type": 2 00:04:19.896 } 00:04:19.896 ], 00:04:19.896 "driver_specific": { 00:04:19.896 "passthru": { 00:04:19.896 "name": "Passthru0", 00:04:19.896 "base_bdev_name": "Malloc2" 00:04:19.896 } 00:04:19.896 } 00:04:19.896 } 00:04:19.896 ]' 00:04:19.896 14:05:18 -- rpc/rpc.sh@21 -- # jq length 00:04:19.896 14:05:18 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:19.896 14:05:18 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:19.896 14:05:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.896 14:05:18 -- common/autotest_common.sh@10 -- # set +x 00:04:19.896 14:05:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.896 14:05:18 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:19.896 14:05:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.896 14:05:18 -- common/autotest_common.sh@10 -- # set +x 00:04:19.896 14:05:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.896 14:05:18 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:19.896 14:05:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.896 14:05:18 -- common/autotest_common.sh@10 -- # set +x 00:04:19.896 14:05:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.896 14:05:18 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:19.896 14:05:18 -- rpc/rpc.sh@26 -- # jq length 00:04:19.896 14:05:18 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:19.896 00:04:19.896 real 0m0.232s 00:04:19.896 user 0m0.122s 00:04:19.896 sys 0m0.031s 00:04:19.896 14:05:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:19.896 14:05:18 -- common/autotest_common.sh@10 -- # set +x 00:04:19.897 ************************************ 00:04:19.897 END TEST 
rpc_daemon_integrity 00:04:19.897 ************************************ 00:04:19.897 14:05:18 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:19.897 14:05:18 -- rpc/rpc.sh@84 -- # killprocess 56186 00:04:19.897 14:05:18 -- common/autotest_common.sh@936 -- # '[' -z 56186 ']' 00:04:19.897 14:05:18 -- common/autotest_common.sh@940 -- # kill -0 56186 00:04:19.897 14:05:18 -- common/autotest_common.sh@941 -- # uname 00:04:19.897 14:05:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:19.897 14:05:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56186 00:04:19.897 14:05:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:19.897 killing process with pid 56186 00:04:19.897 14:05:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:19.897 14:05:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56186' 00:04:19.897 14:05:18 -- common/autotest_common.sh@955 -- # kill 56186 00:04:19.897 14:05:18 -- common/autotest_common.sh@960 -- # wait 56186 00:04:21.270 00:04:21.270 real 0m3.110s 00:04:21.270 user 0m3.547s 00:04:21.270 sys 0m0.537s 00:04:21.270 14:05:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:21.270 14:05:19 -- common/autotest_common.sh@10 -- # set +x 00:04:21.270 ************************************ 00:04:21.270 END TEST rpc 00:04:21.270 ************************************ 00:04:21.270 14:05:19 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:21.270 14:05:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:21.270 14:05:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:21.270 14:05:19 -- common/autotest_common.sh@10 -- # set +x 00:04:21.270 ************************************ 00:04:21.270 START TEST rpc_client 00:04:21.270 ************************************ 00:04:21.270 14:05:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:21.270 * Looking for test storage... 00:04:21.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:21.270 14:05:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:21.270 14:05:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:21.270 14:05:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:21.270 14:05:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:21.270 14:05:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:21.270 14:05:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:21.270 14:05:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:21.270 14:05:19 -- scripts/common.sh@335 -- # IFS=.-: 00:04:21.270 14:05:19 -- scripts/common.sh@335 -- # read -ra ver1 00:04:21.270 14:05:19 -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.270 14:05:19 -- scripts/common.sh@336 -- # read -ra ver2 00:04:21.270 14:05:19 -- scripts/common.sh@337 -- # local 'op=<' 00:04:21.270 14:05:19 -- scripts/common.sh@339 -- # ver1_l=2 00:04:21.270 14:05:19 -- scripts/common.sh@340 -- # ver2_l=1 00:04:21.270 14:05:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:21.270 14:05:19 -- scripts/common.sh@343 -- # case "$op" in 00:04:21.270 14:05:19 -- scripts/common.sh@344 -- # : 1 00:04:21.270 14:05:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:21.270 14:05:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.270 14:05:19 -- scripts/common.sh@364 -- # decimal 1 00:04:21.270 14:05:19 -- scripts/common.sh@352 -- # local d=1 00:04:21.270 14:05:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.270 14:05:19 -- scripts/common.sh@354 -- # echo 1 00:04:21.270 14:05:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:21.270 14:05:19 -- scripts/common.sh@365 -- # decimal 2 00:04:21.270 14:05:19 -- scripts/common.sh@352 -- # local d=2 00:04:21.270 14:05:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.270 14:05:19 -- scripts/common.sh@354 -- # echo 2 00:04:21.270 14:05:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:21.270 14:05:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:21.270 14:05:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:21.270 14:05:19 -- scripts/common.sh@367 -- # return 0 00:04:21.270 14:05:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.270 14:05:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:21.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.270 --rc genhtml_branch_coverage=1 00:04:21.270 --rc genhtml_function_coverage=1 00:04:21.270 --rc genhtml_legend=1 00:04:21.270 --rc geninfo_all_blocks=1 00:04:21.270 --rc geninfo_unexecuted_blocks=1 00:04:21.270 00:04:21.270 ' 00:04:21.270 14:05:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:21.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.270 --rc genhtml_branch_coverage=1 00:04:21.270 --rc genhtml_function_coverage=1 00:04:21.270 --rc genhtml_legend=1 00:04:21.270 --rc geninfo_all_blocks=1 00:04:21.270 --rc geninfo_unexecuted_blocks=1 00:04:21.270 00:04:21.270 ' 00:04:21.270 14:05:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:21.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.270 --rc genhtml_branch_coverage=1 00:04:21.270 --rc genhtml_function_coverage=1 00:04:21.270 --rc genhtml_legend=1 00:04:21.270 --rc geninfo_all_blocks=1 00:04:21.270 --rc geninfo_unexecuted_blocks=1 00:04:21.270 00:04:21.270 ' 00:04:21.270 14:05:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:21.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.270 --rc genhtml_branch_coverage=1 00:04:21.270 --rc genhtml_function_coverage=1 00:04:21.270 --rc genhtml_legend=1 00:04:21.270 --rc geninfo_all_blocks=1 00:04:21.270 --rc geninfo_unexecuted_blocks=1 00:04:21.270 00:04:21.270 ' 00:04:21.270 14:05:19 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:21.270 OK 00:04:21.270 14:05:19 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:21.270 00:04:21.270 real 0m0.178s 00:04:21.270 user 0m0.104s 00:04:21.270 sys 0m0.081s 00:04:21.270 14:05:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:21.271 14:05:19 -- common/autotest_common.sh@10 -- # set +x 00:04:21.271 ************************************ 00:04:21.271 END TEST rpc_client 00:04:21.271 ************************************ 00:04:21.271 14:05:19 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:21.271 14:05:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:21.271 14:05:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:21.271 14:05:19 -- common/autotest_common.sh@10 -- # set +x 00:04:21.271 ************************************ 00:04:21.271 START TEST 
json_config 00:04:21.271 ************************************ 00:04:21.271 14:05:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:21.531 14:05:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:21.531 14:05:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:21.531 14:05:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:21.531 14:05:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:21.531 14:05:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:21.531 14:05:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:21.531 14:05:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:21.531 14:05:19 -- scripts/common.sh@335 -- # IFS=.-: 00:04:21.531 14:05:19 -- scripts/common.sh@335 -- # read -ra ver1 00:04:21.531 14:05:19 -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.531 14:05:19 -- scripts/common.sh@336 -- # read -ra ver2 00:04:21.531 14:05:19 -- scripts/common.sh@337 -- # local 'op=<' 00:04:21.531 14:05:19 -- scripts/common.sh@339 -- # ver1_l=2 00:04:21.531 14:05:19 -- scripts/common.sh@340 -- # ver2_l=1 00:04:21.531 14:05:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:21.531 14:05:19 -- scripts/common.sh@343 -- # case "$op" in 00:04:21.532 14:05:19 -- scripts/common.sh@344 -- # : 1 00:04:21.532 14:05:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:21.532 14:05:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:21.532 14:05:19 -- scripts/common.sh@364 -- # decimal 1 00:04:21.532 14:05:19 -- scripts/common.sh@352 -- # local d=1 00:04:21.532 14:05:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.532 14:05:19 -- scripts/common.sh@354 -- # echo 1 00:04:21.532 14:05:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:21.532 14:05:19 -- scripts/common.sh@365 -- # decimal 2 00:04:21.532 14:05:19 -- scripts/common.sh@352 -- # local d=2 00:04:21.532 14:05:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.532 14:05:19 -- scripts/common.sh@354 -- # echo 2 00:04:21.532 14:05:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:21.532 14:05:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:21.532 14:05:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:21.532 14:05:19 -- scripts/common.sh@367 -- # return 0 00:04:21.532 14:05:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.532 14:05:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:21.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.532 --rc genhtml_branch_coverage=1 00:04:21.532 --rc genhtml_function_coverage=1 00:04:21.532 --rc genhtml_legend=1 00:04:21.532 --rc geninfo_all_blocks=1 00:04:21.532 --rc geninfo_unexecuted_blocks=1 00:04:21.532 00:04:21.532 ' 00:04:21.532 14:05:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:21.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.532 --rc genhtml_branch_coverage=1 00:04:21.532 --rc genhtml_function_coverage=1 00:04:21.532 --rc genhtml_legend=1 00:04:21.532 --rc geninfo_all_blocks=1 00:04:21.532 --rc geninfo_unexecuted_blocks=1 00:04:21.532 00:04:21.532 ' 00:04:21.532 14:05:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:21.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.532 --rc genhtml_branch_coverage=1 00:04:21.532 --rc genhtml_function_coverage=1 00:04:21.532 --rc genhtml_legend=1 00:04:21.532 --rc 
geninfo_all_blocks=1 00:04:21.532 --rc geninfo_unexecuted_blocks=1 00:04:21.532 00:04:21.532 ' 00:04:21.532 14:05:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:21.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.532 --rc genhtml_branch_coverage=1 00:04:21.532 --rc genhtml_function_coverage=1 00:04:21.532 --rc genhtml_legend=1 00:04:21.532 --rc geninfo_all_blocks=1 00:04:21.532 --rc geninfo_unexecuted_blocks=1 00:04:21.532 00:04:21.532 ' 00:04:21.532 14:05:19 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:21.532 14:05:19 -- nvmf/common.sh@7 -- # uname -s 00:04:21.532 14:05:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:21.532 14:05:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:21.532 14:05:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:21.532 14:05:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:21.532 14:05:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:21.532 14:05:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:21.532 14:05:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:21.532 14:05:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:21.532 14:05:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:21.532 14:05:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:21.532 14:05:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e08bbce-c901-475f-81a4-7c34959d137c 00:04:21.532 14:05:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=1e08bbce-c901-475f-81a4-7c34959d137c 00:04:21.532 14:05:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:21.532 14:05:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:21.532 14:05:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:21.532 14:05:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:21.532 14:05:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:21.532 14:05:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.532 14:05:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.532 14:05:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.532 14:05:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.532 14:05:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.532 
14:05:19 -- paths/export.sh@5 -- # export PATH 00:04:21.532 14:05:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.532 14:05:19 -- nvmf/common.sh@46 -- # : 0 00:04:21.532 14:05:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:21.532 14:05:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:21.532 14:05:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:21.532 14:05:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:21.532 14:05:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:21.532 14:05:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:21.532 14:05:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:21.532 14:05:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:21.532 14:05:19 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:21.532 14:05:19 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:21.532 14:05:19 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:21.532 14:05:19 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:21.532 WARNING: No tests are enabled so not running JSON configuration tests 00:04:21.532 14:05:19 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:21.532 14:05:19 -- json_config/json_config.sh@27 -- # exit 0 00:04:21.532 00:04:21.532 real 0m0.128s 00:04:21.532 user 0m0.088s 00:04:21.532 sys 0m0.043s 00:04:21.532 14:05:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:21.532 14:05:19 -- common/autotest_common.sh@10 -- # set +x 00:04:21.532 ************************************ 00:04:21.532 END TEST json_config 00:04:21.532 ************************************ 00:04:21.532 14:05:19 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:21.532 14:05:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:21.532 14:05:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:21.532 14:05:19 -- common/autotest_common.sh@10 -- # set +x 00:04:21.532 ************************************ 00:04:21.532 START TEST json_config_extra_key 00:04:21.532 ************************************ 00:04:21.532 14:05:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:21.532 14:05:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:21.532 14:05:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:21.532 14:05:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:21.532 14:05:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:21.532 14:05:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:21.532 14:05:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:21.532 14:05:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:21.532 14:05:20 -- scripts/common.sh@335 -- # IFS=.-: 00:04:21.533 14:05:20 -- scripts/common.sh@335 -- # read -ra ver1 00:04:21.533 14:05:20 -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.533 14:05:20 
-- scripts/common.sh@336 -- # read -ra ver2 00:04:21.533 14:05:20 -- scripts/common.sh@337 -- # local 'op=<' 00:04:21.533 14:05:20 -- scripts/common.sh@339 -- # ver1_l=2 00:04:21.533 14:05:20 -- scripts/common.sh@340 -- # ver2_l=1 00:04:21.533 14:05:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:21.533 14:05:20 -- scripts/common.sh@343 -- # case "$op" in 00:04:21.533 14:05:20 -- scripts/common.sh@344 -- # : 1 00:04:21.533 14:05:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:21.533 14:05:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:21.533 14:05:20 -- scripts/common.sh@364 -- # decimal 1 00:04:21.533 14:05:20 -- scripts/common.sh@352 -- # local d=1 00:04:21.533 14:05:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.533 14:05:20 -- scripts/common.sh@354 -- # echo 1 00:04:21.533 14:05:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:21.533 14:05:20 -- scripts/common.sh@365 -- # decimal 2 00:04:21.791 14:05:20 -- scripts/common.sh@352 -- # local d=2 00:04:21.791 14:05:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.791 14:05:20 -- scripts/common.sh@354 -- # echo 2 00:04:21.791 14:05:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:21.791 14:05:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:21.791 14:05:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:21.791 14:05:20 -- scripts/common.sh@367 -- # return 0 00:04:21.791 14:05:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.791 14:05:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:21.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.791 --rc genhtml_branch_coverage=1 00:04:21.792 --rc genhtml_function_coverage=1 00:04:21.792 --rc genhtml_legend=1 00:04:21.792 --rc geninfo_all_blocks=1 00:04:21.792 --rc geninfo_unexecuted_blocks=1 00:04:21.792 00:04:21.792 ' 00:04:21.792 14:05:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:21.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.792 --rc genhtml_branch_coverage=1 00:04:21.792 --rc genhtml_function_coverage=1 00:04:21.792 --rc genhtml_legend=1 00:04:21.792 --rc geninfo_all_blocks=1 00:04:21.792 --rc geninfo_unexecuted_blocks=1 00:04:21.792 00:04:21.792 ' 00:04:21.792 14:05:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:21.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.792 --rc genhtml_branch_coverage=1 00:04:21.792 --rc genhtml_function_coverage=1 00:04:21.792 --rc genhtml_legend=1 00:04:21.792 --rc geninfo_all_blocks=1 00:04:21.792 --rc geninfo_unexecuted_blocks=1 00:04:21.792 00:04:21.792 ' 00:04:21.792 14:05:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:21.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.792 --rc genhtml_branch_coverage=1 00:04:21.792 --rc genhtml_function_coverage=1 00:04:21.792 --rc genhtml_legend=1 00:04:21.792 --rc geninfo_all_blocks=1 00:04:21.792 --rc geninfo_unexecuted_blocks=1 00:04:21.792 00:04:21.792 ' 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:21.792 14:05:20 -- nvmf/common.sh@7 -- # uname -s 00:04:21.792 14:05:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:21.792 14:05:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:21.792 14:05:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:21.792 14:05:20 -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:04:21.792 14:05:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:21.792 14:05:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:21.792 14:05:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:21.792 14:05:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:21.792 14:05:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:21.792 14:05:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:21.792 14:05:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e08bbce-c901-475f-81a4-7c34959d137c 00:04:21.792 14:05:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=1e08bbce-c901-475f-81a4-7c34959d137c 00:04:21.792 14:05:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:21.792 14:05:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:21.792 14:05:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:21.792 14:05:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:21.792 14:05:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:21.792 14:05:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.792 14:05:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.792 14:05:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.792 14:05:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.792 14:05:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.792 14:05:20 -- paths/export.sh@5 -- # export PATH 00:04:21.792 14:05:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.792 14:05:20 -- nvmf/common.sh@46 -- # : 0 00:04:21.792 14:05:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:21.792 14:05:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:21.792 14:05:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:21.792 14:05:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:21.792 14:05:20 -- nvmf/common.sh@30 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:21.792 14:05:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:21.792 14:05:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:21.792 14:05:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:21.792 INFO: launching applications... 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56485 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:21.792 Waiting for target to run... 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56485 /var/tmp/spdk_tgt.sock 00:04:21.792 14:05:20 -- common/autotest_common.sh@829 -- # '[' -z 56485 ']' 00:04:21.792 14:05:20 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:21.792 14:05:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:21.792 14:05:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:21.792 14:05:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:21.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:21.792 14:05:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:21.792 14:05:20 -- common/autotest_common.sh@10 -- # set +x 00:04:21.792 [2024-11-19 14:05:20.185479] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:21.792 [2024-11-19 14:05:20.185588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56485 ] 00:04:22.050 [2024-11-19 14:05:20.487045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.307 [2024-11-19 14:05:20.653775] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:22.307 [2024-11-19 14:05:20.653999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.242 00:04:23.242 INFO: shutting down applications... 00:04:23.242 14:05:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:23.242 14:05:21 -- common/autotest_common.sh@862 -- # return 0 00:04:23.242 14:05:21 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:23.242 14:05:21 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:23.242 14:05:21 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:23.242 14:05:21 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:23.242 14:05:21 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:23.242 14:05:21 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56485 ]] 00:04:23.242 14:05:21 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56485 00:04:23.242 14:05:21 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:23.242 14:05:21 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:23.242 14:05:21 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56485 00:04:23.242 14:05:21 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:23.807 14:05:22 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:23.807 14:05:22 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:23.807 14:05:22 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56485 00:04:23.807 14:05:22 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:24.374 14:05:22 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:24.374 14:05:22 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:24.374 14:05:22 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56485 00:04:24.374 14:05:22 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:24.941 14:05:23 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:24.941 14:05:23 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:24.941 14:05:23 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56485 00:04:24.941 14:05:23 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:24.941 14:05:23 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:24.941 14:05:23 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:24.941 14:05:23 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:24.941 SPDK target shutdown done 00:04:24.941 Success 00:04:24.941 14:05:23 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:24.941 00:04:24.941 real 0m3.215s 00:04:24.941 user 0m3.229s 00:04:24.941 sys 0m0.391s 00:04:24.941 14:05:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:24.941 14:05:23 -- common/autotest_common.sh@10 -- # set +x 00:04:24.941 ************************************ 00:04:24.941 END TEST json_config_extra_key 00:04:24.941 
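Note: the shutdown sequence above sends SIGINT and then polls kill -0 for up to 30 half-second intervals before declaring 'SPDK target shutdown done'. The same loop, condensed into a standalone sketch (iteration count and sleep interval taken from the trace; the failure message is an assumption):

shutdown_app() {
    local pid=$1 i
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 only tests existence; success means the pid is still alive
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    echo "process $pid did not exit within 15s" >&2
    return 1
}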
************************************ 00:04:24.941 14:05:23 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:24.941 14:05:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.941 14:05:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.941 14:05:23 -- common/autotest_common.sh@10 -- # set +x 00:04:24.941 ************************************ 00:04:24.941 START TEST alias_rpc 00:04:24.941 ************************************ 00:04:24.941 14:05:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:24.941 * Looking for test storage... 00:04:24.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:24.941 14:05:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:24.941 14:05:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:24.941 14:05:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:24.941 14:05:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:24.941 14:05:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:24.941 14:05:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:24.941 14:05:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:24.941 14:05:23 -- scripts/common.sh@335 -- # IFS=.-: 00:04:24.941 14:05:23 -- scripts/common.sh@335 -- # read -ra ver1 00:04:24.941 14:05:23 -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.941 14:05:23 -- scripts/common.sh@336 -- # read -ra ver2 00:04:24.941 14:05:23 -- scripts/common.sh@337 -- # local 'op=<' 00:04:24.941 14:05:23 -- scripts/common.sh@339 -- # ver1_l=2 00:04:24.941 14:05:23 -- scripts/common.sh@340 -- # ver2_l=1 00:04:24.941 14:05:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:24.941 14:05:23 -- scripts/common.sh@343 -- # case "$op" in 00:04:24.941 14:05:23 -- scripts/common.sh@344 -- # : 1 00:04:24.941 14:05:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:24.941 14:05:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.941 14:05:23 -- scripts/common.sh@364 -- # decimal 1 00:04:24.941 14:05:23 -- scripts/common.sh@352 -- # local d=1 00:04:24.941 14:05:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.941 14:05:23 -- scripts/common.sh@354 -- # echo 1 00:04:24.941 14:05:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:24.941 14:05:23 -- scripts/common.sh@365 -- # decimal 2 00:04:24.941 14:05:23 -- scripts/common.sh@352 -- # local d=2 00:04:24.941 14:05:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.941 14:05:23 -- scripts/common.sh@354 -- # echo 2 00:04:24.941 14:05:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:24.941 14:05:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:24.941 14:05:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:24.941 14:05:23 -- scripts/common.sh@367 -- # return 0 00:04:24.941 14:05:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.941 14:05:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:24.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.941 --rc genhtml_branch_coverage=1 00:04:24.941 --rc genhtml_function_coverage=1 00:04:24.941 --rc genhtml_legend=1 00:04:24.941 --rc geninfo_all_blocks=1 00:04:24.941 --rc geninfo_unexecuted_blocks=1 00:04:24.941 00:04:24.941 ' 00:04:24.941 14:05:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:24.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.941 --rc genhtml_branch_coverage=1 00:04:24.942 --rc genhtml_function_coverage=1 00:04:24.942 --rc genhtml_legend=1 00:04:24.942 --rc geninfo_all_blocks=1 00:04:24.942 --rc geninfo_unexecuted_blocks=1 00:04:24.942 00:04:24.942 ' 00:04:24.942 14:05:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:24.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.942 --rc genhtml_branch_coverage=1 00:04:24.942 --rc genhtml_function_coverage=1 00:04:24.942 --rc genhtml_legend=1 00:04:24.942 --rc geninfo_all_blocks=1 00:04:24.942 --rc geninfo_unexecuted_blocks=1 00:04:24.942 00:04:24.942 ' 00:04:24.942 14:05:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:24.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.942 --rc genhtml_branch_coverage=1 00:04:24.942 --rc genhtml_function_coverage=1 00:04:24.942 --rc genhtml_legend=1 00:04:24.942 --rc geninfo_all_blocks=1 00:04:24.942 --rc geninfo_unexecuted_blocks=1 00:04:24.942 00:04:24.942 ' 00:04:24.942 14:05:23 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:24.942 14:05:23 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56584 00:04:24.942 14:05:23 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56584 00:04:24.942 14:05:23 -- common/autotest_common.sh@829 -- # '[' -z 56584 ']' 00:04:24.942 14:05:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.942 14:05:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.942 14:05:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
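Note: the cmp_versions trace that keeps recurring above (lt 1.15 2) splits dotted versions on IFS=.-: and compares them field by field to decide which lcov coverage flags to export. A condensed sketch of the same comparison; the function and variable names here are illustrative, not the actual scripts/common.sh implementation:

version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        # Missing fields compare as 0, so "1.15 < 2" becomes "1.15 < 2.0"
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo 'old lcov detected: enable branch/function coverage options'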
00:04:24.942 14:05:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.942 14:05:23 -- common/autotest_common.sh@10 -- # set +x 00:04:24.942 14:05:23 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:24.942 [2024-11-19 14:05:23.446449] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:24.942 [2024-11-19 14:05:23.446560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56584 ] 00:04:25.200 [2024-11-19 14:05:23.597457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.458 [2024-11-19 14:05:23.768246] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:25.459 [2024-11-19 14:05:23.768445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.453 14:05:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:26.453 14:05:24 -- common/autotest_common.sh@862 -- # return 0 00:04:26.453 14:05:24 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:26.709 14:05:25 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56584 00:04:26.709 14:05:25 -- common/autotest_common.sh@936 -- # '[' -z 56584 ']' 00:04:26.709 14:05:25 -- common/autotest_common.sh@940 -- # kill -0 56584 00:04:26.709 14:05:25 -- common/autotest_common.sh@941 -- # uname 00:04:26.709 14:05:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:26.709 14:05:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56584 00:04:26.709 14:05:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:26.709 14:05:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:26.709 14:05:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56584' 00:04:26.709 killing process with pid 56584 00:04:26.709 14:05:25 -- common/autotest_common.sh@955 -- # kill 56584 00:04:26.709 14:05:25 -- common/autotest_common.sh@960 -- # wait 56584 00:04:28.087 00:04:28.087 real 0m3.078s 00:04:28.087 user 0m3.273s 00:04:28.087 sys 0m0.415s 00:04:28.087 14:05:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:28.087 ************************************ 00:04:28.087 END TEST alias_rpc 00:04:28.087 ************************************ 00:04:28.087 14:05:26 -- common/autotest_common.sh@10 -- # set +x 00:04:28.087 14:05:26 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:04:28.087 14:05:26 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:28.087 14:05:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:28.087 14:05:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:28.087 14:05:26 -- common/autotest_common.sh@10 -- # set +x 00:04:28.087 ************************************ 00:04:28.087 START TEST spdkcli_tcp 00:04:28.087 ************************************ 00:04:28.087 14:05:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:28.087 * Looking for test storage... 
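Note: killprocess, as traced above for pid 56584, verifies the pid is alive, checks on Linux that its comm name is not sudo before signalling, then kills and waits so the exit status is reaped. A reduced sketch of that pattern (the sudo branch is simplified here; the real helper handles it differently):

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                    # must still be running
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name != sudo ]] || return 1   # never signal a sudo wrapper directly
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                   # reap; propagates exit status
}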
00:04:28.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:28.087 14:05:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:28.087 14:05:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:28.087 14:05:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:28.087 14:05:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:28.087 14:05:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:28.087 14:05:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:28.087 14:05:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:28.087 14:05:26 -- scripts/common.sh@335 -- # IFS=.-: 00:04:28.087 14:05:26 -- scripts/common.sh@335 -- # read -ra ver1 00:04:28.087 14:05:26 -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.087 14:05:26 -- scripts/common.sh@336 -- # read -ra ver2 00:04:28.087 14:05:26 -- scripts/common.sh@337 -- # local 'op=<' 00:04:28.087 14:05:26 -- scripts/common.sh@339 -- # ver1_l=2 00:04:28.087 14:05:26 -- scripts/common.sh@340 -- # ver2_l=1 00:04:28.087 14:05:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:28.087 14:05:26 -- scripts/common.sh@343 -- # case "$op" in 00:04:28.087 14:05:26 -- scripts/common.sh@344 -- # : 1 00:04:28.087 14:05:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:28.087 14:05:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:28.087 14:05:26 -- scripts/common.sh@364 -- # decimal 1 00:04:28.087 14:05:26 -- scripts/common.sh@352 -- # local d=1 00:04:28.087 14:05:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.087 14:05:26 -- scripts/common.sh@354 -- # echo 1 00:04:28.087 14:05:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:28.087 14:05:26 -- scripts/common.sh@365 -- # decimal 2 00:04:28.087 14:05:26 -- scripts/common.sh@352 -- # local d=2 00:04:28.087 14:05:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.087 14:05:26 -- scripts/common.sh@354 -- # echo 2 00:04:28.087 14:05:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:28.087 14:05:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:28.087 14:05:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:28.087 14:05:26 -- scripts/common.sh@367 -- # return 0 00:04:28.087 14:05:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.087 14:05:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:28.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.087 --rc genhtml_branch_coverage=1 00:04:28.087 --rc genhtml_function_coverage=1 00:04:28.087 --rc genhtml_legend=1 00:04:28.087 --rc geninfo_all_blocks=1 00:04:28.087 --rc geninfo_unexecuted_blocks=1 00:04:28.087 00:04:28.087 ' 00:04:28.087 14:05:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:28.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.087 --rc genhtml_branch_coverage=1 00:04:28.087 --rc genhtml_function_coverage=1 00:04:28.087 --rc genhtml_legend=1 00:04:28.087 --rc geninfo_all_blocks=1 00:04:28.087 --rc geninfo_unexecuted_blocks=1 00:04:28.087 00:04:28.087 ' 00:04:28.087 14:05:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:28.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.087 --rc genhtml_branch_coverage=1 00:04:28.087 --rc genhtml_function_coverage=1 00:04:28.087 --rc genhtml_legend=1 00:04:28.087 --rc geninfo_all_blocks=1 00:04:28.087 --rc geninfo_unexecuted_blocks=1 00:04:28.087 00:04:28.087 ' 00:04:28.087 14:05:26 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:28.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.087 --rc genhtml_branch_coverage=1 00:04:28.087 --rc genhtml_function_coverage=1 00:04:28.087 --rc genhtml_legend=1 00:04:28.087 --rc geninfo_all_blocks=1 00:04:28.087 --rc geninfo_unexecuted_blocks=1 00:04:28.087 00:04:28.087 ' 00:04:28.087 14:05:26 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:28.087 14:05:26 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:28.087 14:05:26 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:28.087 14:05:26 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:28.087 14:05:26 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:28.087 14:05:26 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:28.087 14:05:26 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:28.087 14:05:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:28.087 14:05:26 -- common/autotest_common.sh@10 -- # set +x 00:04:28.087 14:05:26 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=56681 00:04:28.087 14:05:26 -- spdkcli/tcp.sh@27 -- # waitforlisten 56681 00:04:28.087 14:05:26 -- common/autotest_common.sh@829 -- # '[' -z 56681 ']' 00:04:28.087 14:05:26 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:28.087 14:05:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.088 14:05:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:28.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.088 14:05:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.088 14:05:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:28.088 14:05:26 -- common/autotest_common.sh@10 -- # set +x 00:04:28.088 [2024-11-19 14:05:26.567309] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
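Note: the spdkcli_tcp run that follows exposes the target's UNIX-domain RPC socket on TCP 127.0.0.1:9998 via socat, then drives rpc.py against the TCP endpoint with retries. A standalone sketch of that bridge; the reuseaddr,fork options are an addition for robustness, where the trace uses a bare TCP-LISTEN:

# Bridge TCP port 9998 to the SPDK RPC UNIX socket
socat TCP-LISTEN:9998,reuseaddr,fork UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# -r 100 connection retries with a -t 2s timeout, matching the flags in the trace below
scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid"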
00:04:28.088 [2024-11-19 14:05:26.567414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56681 ] 00:04:28.346 [2024-11-19 14:05:26.705127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:28.346 [2024-11-19 14:05:26.845225] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:28.346 [2024-11-19 14:05:26.845558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.346 [2024-11-19 14:05:26.845587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.912 14:05:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:28.912 14:05:27 -- common/autotest_common.sh@862 -- # return 0 00:04:28.912 14:05:27 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:28.912 14:05:27 -- spdkcli/tcp.sh@31 -- # socat_pid=56698 00:04:28.912 14:05:27 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:29.171 [ 00:04:29.171 "bdev_malloc_delete", 00:04:29.171 "bdev_malloc_create", 00:04:29.171 "bdev_null_resize", 00:04:29.171 "bdev_null_delete", 00:04:29.171 "bdev_null_create", 00:04:29.171 "bdev_nvme_cuse_unregister", 00:04:29.171 "bdev_nvme_cuse_register", 00:04:29.171 "bdev_opal_new_user", 00:04:29.171 "bdev_opal_set_lock_state", 00:04:29.171 "bdev_opal_delete", 00:04:29.171 "bdev_opal_get_info", 00:04:29.171 "bdev_opal_create", 00:04:29.171 "bdev_nvme_opal_revert", 00:04:29.171 "bdev_nvme_opal_init", 00:04:29.171 "bdev_nvme_send_cmd", 00:04:29.171 "bdev_nvme_get_path_iostat", 00:04:29.171 "bdev_nvme_get_mdns_discovery_info", 00:04:29.171 "bdev_nvme_stop_mdns_discovery", 00:04:29.171 "bdev_nvme_start_mdns_discovery", 00:04:29.171 "bdev_nvme_set_multipath_policy", 00:04:29.171 "bdev_nvme_set_preferred_path", 00:04:29.171 "bdev_nvme_get_io_paths", 00:04:29.171 "bdev_nvme_remove_error_injection", 00:04:29.171 "bdev_nvme_add_error_injection", 00:04:29.171 "bdev_nvme_get_discovery_info", 00:04:29.171 "bdev_nvme_stop_discovery", 00:04:29.171 "bdev_nvme_start_discovery", 00:04:29.171 "bdev_nvme_get_controller_health_info", 00:04:29.171 "bdev_nvme_disable_controller", 00:04:29.171 "bdev_nvme_enable_controller", 00:04:29.171 "bdev_nvme_reset_controller", 00:04:29.171 "bdev_nvme_get_transport_statistics", 00:04:29.171 "bdev_nvme_apply_firmware", 00:04:29.171 "bdev_nvme_detach_controller", 00:04:29.171 "bdev_nvme_get_controllers", 00:04:29.171 "bdev_nvme_attach_controller", 00:04:29.171 "bdev_nvme_set_hotplug", 00:04:29.171 "bdev_nvme_set_options", 00:04:29.171 "bdev_passthru_delete", 00:04:29.171 "bdev_passthru_create", 00:04:29.171 "bdev_lvol_grow_lvstore", 00:04:29.171 "bdev_lvol_get_lvols", 00:04:29.171 "bdev_lvol_get_lvstores", 00:04:29.171 "bdev_lvol_delete", 00:04:29.171 "bdev_lvol_set_read_only", 00:04:29.171 "bdev_lvol_resize", 00:04:29.171 "bdev_lvol_decouple_parent", 00:04:29.171 "bdev_lvol_inflate", 00:04:29.171 "bdev_lvol_rename", 00:04:29.171 "bdev_lvol_clone_bdev", 00:04:29.171 "bdev_lvol_clone", 00:04:29.171 "bdev_lvol_snapshot", 00:04:29.171 "bdev_lvol_create", 00:04:29.171 "bdev_lvol_delete_lvstore", 00:04:29.171 "bdev_lvol_rename_lvstore", 00:04:29.171 "bdev_lvol_create_lvstore", 00:04:29.171 "bdev_raid_set_options", 00:04:29.171 "bdev_raid_remove_base_bdev", 00:04:29.171 "bdev_raid_add_base_bdev", 
00:04:29.171 "bdev_raid_delete", 00:04:29.171 "bdev_raid_create", 00:04:29.171 "bdev_raid_get_bdevs", 00:04:29.171 "bdev_error_inject_error", 00:04:29.171 "bdev_error_delete", 00:04:29.171 "bdev_error_create", 00:04:29.172 "bdev_split_delete", 00:04:29.172 "bdev_split_create", 00:04:29.172 "bdev_delay_delete", 00:04:29.172 "bdev_delay_create", 00:04:29.172 "bdev_delay_update_latency", 00:04:29.172 "bdev_zone_block_delete", 00:04:29.172 "bdev_zone_block_create", 00:04:29.172 "blobfs_create", 00:04:29.172 "blobfs_detect", 00:04:29.172 "blobfs_set_cache_size", 00:04:29.172 "bdev_xnvme_delete", 00:04:29.172 "bdev_xnvme_create", 00:04:29.172 "bdev_aio_delete", 00:04:29.172 "bdev_aio_rescan", 00:04:29.172 "bdev_aio_create", 00:04:29.172 "bdev_ftl_set_property", 00:04:29.172 "bdev_ftl_get_properties", 00:04:29.172 "bdev_ftl_get_stats", 00:04:29.172 "bdev_ftl_unmap", 00:04:29.172 "bdev_ftl_unload", 00:04:29.172 "bdev_ftl_delete", 00:04:29.172 "bdev_ftl_load", 00:04:29.172 "bdev_ftl_create", 00:04:29.172 "bdev_virtio_attach_controller", 00:04:29.172 "bdev_virtio_scsi_get_devices", 00:04:29.172 "bdev_virtio_detach_controller", 00:04:29.172 "bdev_virtio_blk_set_hotplug", 00:04:29.172 "bdev_iscsi_delete", 00:04:29.172 "bdev_iscsi_create", 00:04:29.172 "bdev_iscsi_set_options", 00:04:29.172 "accel_error_inject_error", 00:04:29.172 "ioat_scan_accel_module", 00:04:29.172 "dsa_scan_accel_module", 00:04:29.172 "iaa_scan_accel_module", 00:04:29.172 "iscsi_set_options", 00:04:29.172 "iscsi_get_auth_groups", 00:04:29.172 "iscsi_auth_group_remove_secret", 00:04:29.172 "iscsi_auth_group_add_secret", 00:04:29.172 "iscsi_delete_auth_group", 00:04:29.172 "iscsi_create_auth_group", 00:04:29.172 "iscsi_set_discovery_auth", 00:04:29.172 "iscsi_get_options", 00:04:29.172 "iscsi_target_node_request_logout", 00:04:29.172 "iscsi_target_node_set_redirect", 00:04:29.172 "iscsi_target_node_set_auth", 00:04:29.172 "iscsi_target_node_add_lun", 00:04:29.172 "iscsi_get_connections", 00:04:29.172 "iscsi_portal_group_set_auth", 00:04:29.172 "iscsi_start_portal_group", 00:04:29.172 "iscsi_delete_portal_group", 00:04:29.172 "iscsi_create_portal_group", 00:04:29.172 "iscsi_get_portal_groups", 00:04:29.172 "iscsi_delete_target_node", 00:04:29.172 "iscsi_target_node_remove_pg_ig_maps", 00:04:29.172 "iscsi_target_node_add_pg_ig_maps", 00:04:29.172 "iscsi_create_target_node", 00:04:29.172 "iscsi_get_target_nodes", 00:04:29.172 "iscsi_delete_initiator_group", 00:04:29.172 "iscsi_initiator_group_remove_initiators", 00:04:29.172 "iscsi_initiator_group_add_initiators", 00:04:29.172 "iscsi_create_initiator_group", 00:04:29.172 "iscsi_get_initiator_groups", 00:04:29.172 "nvmf_set_crdt", 00:04:29.172 "nvmf_set_config", 00:04:29.172 "nvmf_set_max_subsystems", 00:04:29.172 "nvmf_subsystem_get_listeners", 00:04:29.172 "nvmf_subsystem_get_qpairs", 00:04:29.172 "nvmf_subsystem_get_controllers", 00:04:29.172 "nvmf_get_stats", 00:04:29.172 "nvmf_get_transports", 00:04:29.172 "nvmf_create_transport", 00:04:29.172 "nvmf_get_targets", 00:04:29.172 "nvmf_delete_target", 00:04:29.172 "nvmf_create_target", 00:04:29.172 "nvmf_subsystem_allow_any_host", 00:04:29.172 "nvmf_subsystem_remove_host", 00:04:29.172 "nvmf_subsystem_add_host", 00:04:29.172 "nvmf_subsystem_remove_ns", 00:04:29.172 "nvmf_subsystem_add_ns", 00:04:29.172 "nvmf_subsystem_listener_set_ana_state", 00:04:29.172 "nvmf_discovery_get_referrals", 00:04:29.172 "nvmf_discovery_remove_referral", 00:04:29.172 "nvmf_discovery_add_referral", 00:04:29.172 "nvmf_subsystem_remove_listener", 00:04:29.172 
"nvmf_subsystem_add_listener", 00:04:29.172 "nvmf_delete_subsystem", 00:04:29.172 "nvmf_create_subsystem", 00:04:29.172 "nvmf_get_subsystems", 00:04:29.172 "env_dpdk_get_mem_stats", 00:04:29.172 "nbd_get_disks", 00:04:29.172 "nbd_stop_disk", 00:04:29.172 "nbd_start_disk", 00:04:29.172 "ublk_recover_disk", 00:04:29.172 "ublk_get_disks", 00:04:29.172 "ublk_stop_disk", 00:04:29.172 "ublk_start_disk", 00:04:29.172 "ublk_destroy_target", 00:04:29.172 "ublk_create_target", 00:04:29.172 "virtio_blk_create_transport", 00:04:29.172 "virtio_blk_get_transports", 00:04:29.172 "vhost_controller_set_coalescing", 00:04:29.172 "vhost_get_controllers", 00:04:29.172 "vhost_delete_controller", 00:04:29.172 "vhost_create_blk_controller", 00:04:29.172 "vhost_scsi_controller_remove_target", 00:04:29.172 "vhost_scsi_controller_add_target", 00:04:29.172 "vhost_start_scsi_controller", 00:04:29.172 "vhost_create_scsi_controller", 00:04:29.172 "thread_set_cpumask", 00:04:29.172 "framework_get_scheduler", 00:04:29.172 "framework_set_scheduler", 00:04:29.172 "framework_get_reactors", 00:04:29.172 "thread_get_io_channels", 00:04:29.172 "thread_get_pollers", 00:04:29.172 "thread_get_stats", 00:04:29.172 "framework_monitor_context_switch", 00:04:29.172 "spdk_kill_instance", 00:04:29.172 "log_enable_timestamps", 00:04:29.172 "log_get_flags", 00:04:29.172 "log_clear_flag", 00:04:29.172 "log_set_flag", 00:04:29.172 "log_get_level", 00:04:29.172 "log_set_level", 00:04:29.172 "log_get_print_level", 00:04:29.172 "log_set_print_level", 00:04:29.172 "framework_enable_cpumask_locks", 00:04:29.172 "framework_disable_cpumask_locks", 00:04:29.172 "framework_wait_init", 00:04:29.172 "framework_start_init", 00:04:29.172 "scsi_get_devices", 00:04:29.172 "bdev_get_histogram", 00:04:29.172 "bdev_enable_histogram", 00:04:29.172 "bdev_set_qos_limit", 00:04:29.172 "bdev_set_qd_sampling_period", 00:04:29.172 "bdev_get_bdevs", 00:04:29.172 "bdev_reset_iostat", 00:04:29.172 "bdev_get_iostat", 00:04:29.172 "bdev_examine", 00:04:29.172 "bdev_wait_for_examine", 00:04:29.172 "bdev_set_options", 00:04:29.172 "notify_get_notifications", 00:04:29.172 "notify_get_types", 00:04:29.172 "accel_get_stats", 00:04:29.172 "accel_set_options", 00:04:29.172 "accel_set_driver", 00:04:29.172 "accel_crypto_key_destroy", 00:04:29.172 "accel_crypto_keys_get", 00:04:29.172 "accel_crypto_key_create", 00:04:29.172 "accel_assign_opc", 00:04:29.172 "accel_get_module_info", 00:04:29.172 "accel_get_opc_assignments", 00:04:29.172 "vmd_rescan", 00:04:29.172 "vmd_remove_device", 00:04:29.172 "vmd_enable", 00:04:29.172 "sock_set_default_impl", 00:04:29.172 "sock_impl_set_options", 00:04:29.172 "sock_impl_get_options", 00:04:29.172 "iobuf_get_stats", 00:04:29.172 "iobuf_set_options", 00:04:29.172 "framework_get_pci_devices", 00:04:29.172 "framework_get_config", 00:04:29.172 "framework_get_subsystems", 00:04:29.172 "trace_get_info", 00:04:29.172 "trace_get_tpoint_group_mask", 00:04:29.172 "trace_disable_tpoint_group", 00:04:29.172 "trace_enable_tpoint_group", 00:04:29.172 "trace_clear_tpoint_mask", 00:04:29.172 "trace_set_tpoint_mask", 00:04:29.172 "spdk_get_version", 00:04:29.172 "rpc_get_methods" 00:04:29.172 ] 00:04:29.172 14:05:27 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:29.172 14:05:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:29.172 14:05:27 -- common/autotest_common.sh@10 -- # set +x 00:04:29.172 14:05:27 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:29.172 14:05:27 -- spdkcli/tcp.sh@38 -- # killprocess 56681 00:04:29.172 
14:05:27 -- common/autotest_common.sh@936 -- # '[' -z 56681 ']' 00:04:29.172 14:05:27 -- common/autotest_common.sh@940 -- # kill -0 56681 00:04:29.172 14:05:27 -- common/autotest_common.sh@941 -- # uname 00:04:29.172 14:05:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:29.172 14:05:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56681 00:04:29.172 14:05:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:29.172 14:05:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:29.172 killing process with pid 56681 00:04:29.172 14:05:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56681' 00:04:29.172 14:05:27 -- common/autotest_common.sh@955 -- # kill 56681 00:04:29.172 14:05:27 -- common/autotest_common.sh@960 -- # wait 56681 00:04:30.551 00:04:30.551 real 0m2.436s 00:04:30.551 user 0m4.240s 00:04:30.551 sys 0m0.385s 00:04:30.551 14:05:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:30.551 14:05:28 -- common/autotest_common.sh@10 -- # set +x 00:04:30.551 ************************************ 00:04:30.551 END TEST spdkcli_tcp 00:04:30.551 ************************************ 00:04:30.551 14:05:28 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:30.551 14:05:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:30.551 14:05:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.551 14:05:28 -- common/autotest_common.sh@10 -- # set +x 00:04:30.551 ************************************ 00:04:30.551 START TEST dpdk_mem_utility 00:04:30.551 ************************************ 00:04:30.551 14:05:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:30.551 * Looking for test storage... 00:04:30.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:30.551 14:05:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:30.551 14:05:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:30.551 14:05:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:30.551 14:05:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:30.551 14:05:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:30.551 14:05:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:30.551 14:05:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:30.551 14:05:28 -- scripts/common.sh@335 -- # IFS=.-: 00:04:30.551 14:05:28 -- scripts/common.sh@335 -- # read -ra ver1 00:04:30.551 14:05:28 -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.551 14:05:28 -- scripts/common.sh@336 -- # read -ra ver2 00:04:30.551 14:05:28 -- scripts/common.sh@337 -- # local 'op=<' 00:04:30.551 14:05:28 -- scripts/common.sh@339 -- # ver1_l=2 00:04:30.551 14:05:28 -- scripts/common.sh@340 -- # ver2_l=1 00:04:30.551 14:05:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:30.551 14:05:28 -- scripts/common.sh@343 -- # case "$op" in 00:04:30.551 14:05:28 -- scripts/common.sh@344 -- # : 1 00:04:30.551 14:05:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:30.551 14:05:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.551 14:05:28 -- scripts/common.sh@364 -- # decimal 1 00:04:30.551 14:05:28 -- scripts/common.sh@352 -- # local d=1 00:04:30.551 14:05:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.551 14:05:28 -- scripts/common.sh@354 -- # echo 1 00:04:30.551 14:05:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:30.551 14:05:28 -- scripts/common.sh@365 -- # decimal 2 00:04:30.551 14:05:28 -- scripts/common.sh@352 -- # local d=2 00:04:30.551 14:05:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.551 14:05:28 -- scripts/common.sh@354 -- # echo 2 00:04:30.551 14:05:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:30.551 14:05:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:30.551 14:05:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:30.551 14:05:28 -- scripts/common.sh@367 -- # return 0 00:04:30.551 14:05:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.551 14:05:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:30.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.551 --rc genhtml_branch_coverage=1 00:04:30.551 --rc genhtml_function_coverage=1 00:04:30.551 --rc genhtml_legend=1 00:04:30.551 --rc geninfo_all_blocks=1 00:04:30.551 --rc geninfo_unexecuted_blocks=1 00:04:30.551 00:04:30.551 ' 00:04:30.551 14:05:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:30.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.551 --rc genhtml_branch_coverage=1 00:04:30.551 --rc genhtml_function_coverage=1 00:04:30.551 --rc genhtml_legend=1 00:04:30.551 --rc geninfo_all_blocks=1 00:04:30.551 --rc geninfo_unexecuted_blocks=1 00:04:30.551 00:04:30.551 ' 00:04:30.551 14:05:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:30.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.551 --rc genhtml_branch_coverage=1 00:04:30.551 --rc genhtml_function_coverage=1 00:04:30.551 --rc genhtml_legend=1 00:04:30.551 --rc geninfo_all_blocks=1 00:04:30.551 --rc geninfo_unexecuted_blocks=1 00:04:30.551 00:04:30.551 ' 00:04:30.551 14:05:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:30.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.551 --rc genhtml_branch_coverage=1 00:04:30.551 --rc genhtml_function_coverage=1 00:04:30.551 --rc genhtml_legend=1 00:04:30.551 --rc geninfo_all_blocks=1 00:04:30.551 --rc geninfo_unexecuted_blocks=1 00:04:30.551 00:04:30.551 ' 00:04:30.551 14:05:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:30.551 14:05:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56780 00:04:30.551 14:05:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56780 00:04:30.551 14:05:28 -- common/autotest_common.sh@829 -- # '[' -z 56780 ']' 00:04:30.551 14:05:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.551 14:05:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.551 14:05:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
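Note: the dpdk_mem_utility test below dumps the target's DPDK allocator state with env_dpdk_get_mem_stats (which writes /tmp/spdk_mem_dump.txt) and renders it with scripts/dpdk_mem_info.py. A condensed sketch of that flow; the jq extraction is an assumption for illustration:

# Ask the running target to write its memory dump
scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats | jq -r .filename
# -> /tmp/spdk_mem_dump.txt

# Summarize heaps, mempools and memzones from the dump
scripts/dpdk_mem_info.py

# Per-element detail for heap 0, as in the listing that follows
scripts/dpdk_mem_info.py -m 0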
00:04:30.551 14:05:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.551 14:05:28 -- common/autotest_common.sh@10 -- # set +x 00:04:30.551 14:05:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.551 [2024-11-19 14:05:29.037828] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:30.551 [2024-11-19 14:05:29.037946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56780 ] 00:04:30.821 [2024-11-19 14:05:29.178459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.821 [2024-11-19 14:05:29.357854] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:30.822 [2024-11-19 14:05:29.358083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.200 14:05:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:32.200 14:05:30 -- common/autotest_common.sh@862 -- # return 0 00:04:32.200 14:05:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:32.200 14:05:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:32.200 14:05:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.200 14:05:30 -- common/autotest_common.sh@10 -- # set +x 00:04:32.200 { 00:04:32.200 "filename": "/tmp/spdk_mem_dump.txt" 00:04:32.200 } 00:04:32.200 14:05:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.200 14:05:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:32.200 DPDK memory size 820.000000 MiB in 1 heap(s) 00:04:32.200 1 heaps totaling size 820.000000 MiB 00:04:32.200 size: 820.000000 MiB heap id: 0 00:04:32.200 end heaps---------- 00:04:32.200 8 mempools totaling size 598.116089 MiB 00:04:32.200 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:32.200 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:32.200 size: 84.521057 MiB name: bdev_io_56780 00:04:32.200 size: 51.011292 MiB name: evtpool_56780 00:04:32.200 size: 50.003479 MiB name: msgpool_56780 00:04:32.200 size: 21.763794 MiB name: PDU_Pool 00:04:32.200 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:32.200 size: 0.026123 MiB name: Session_Pool 00:04:32.200 end mempools------- 00:04:32.200 6 memzones totaling size 4.142822 MiB 00:04:32.200 size: 1.000366 MiB name: RG_ring_0_56780 00:04:32.200 size: 1.000366 MiB name: RG_ring_1_56780 00:04:32.200 size: 1.000366 MiB name: RG_ring_4_56780 00:04:32.200 size: 1.000366 MiB name: RG_ring_5_56780 00:04:32.200 size: 0.125366 MiB name: RG_ring_2_56780 00:04:32.200 size: 0.015991 MiB name: RG_ring_3_56780 00:04:32.200 end memzones------- 00:04:32.200 14:05:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:32.200 heap id: 0 total size: 820.000000 MiB number of busy elements: 301 number of free elements: 18 00:04:32.200 list of free elements. 
size: 18.451294 MiB 00:04:32.200 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:32.200 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:32.200 element at address: 0x200007000000 with size: 1.995972 MiB 00:04:32.200 element at address: 0x20000b200000 with size: 1.995972 MiB 00:04:32.200 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:32.200 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:32.200 element at address: 0x200019600000 with size: 0.999084 MiB 00:04:32.200 element at address: 0x200003e00000 with size: 0.996094 MiB 00:04:32.200 element at address: 0x200032200000 with size: 0.994324 MiB 00:04:32.200 element at address: 0x200018e00000 with size: 0.959656 MiB 00:04:32.200 element at address: 0x200019900040 with size: 0.936401 MiB 00:04:32.200 element at address: 0x200000200000 with size: 0.829224 MiB 00:04:32.200 element at address: 0x20001b000000 with size: 0.563416 MiB 00:04:32.200 element at address: 0x200019200000 with size: 0.487976 MiB 00:04:32.200 element at address: 0x200019a00000 with size: 0.485413 MiB 00:04:32.200 element at address: 0x200013800000 with size: 0.469116 MiB 00:04:32.200 element at address: 0x200028400000 with size: 0.390442 MiB 00:04:32.200 element at address: 0x200003a00000 with size: 0.351990 MiB 00:04:32.200 list of standard malloc elements. size: 199.284302 MiB 00:04:32.200 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:04:32.200 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:04:32.200 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:32.200 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:32.200 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:32.200 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:32.200 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:04:32.200 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:32.200 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:04:32.200 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:04:32.200 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:04:32.200 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:04:32.200 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:04:32.200 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:04:32.201 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200003aff980 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200003affa80 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200003eff000 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200013878180 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200013878280 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200013878380 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200013878480 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200013878580 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001927d6c0 
with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x200019abc680 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b092bc0 with size: 0.000244 MiB 
00:04:32.201 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:04:32.201 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:04:32.202 element at address: 0x200028463f40 with size: 0.000244 MiB 00:04:32.202 element at address: 0x200028464040 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846af80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846b080 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846b180 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846b280 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846b380 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846b480 with size: 0.000244 MiB 00:04:32.202 element at 
address: 0x20002846b580 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846b680 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846b780 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846b880 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846b980 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846be80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846c080 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846c180 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846c280 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846c380 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846c480 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846c580 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846c680 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846c780 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846c880 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846c980 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846d080 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846d180 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846d280 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846d380 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846d480 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846d580 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846d680 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846d780 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846d880 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846d980 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846da80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846db80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846de80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846df80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846e080 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846e180 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846e280 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846e380 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846e480 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846e580 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846e680 
with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846e780 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846e880 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846e980 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846f080 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846f180 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846f280 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846f380 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846f480 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846f580 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846f680 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846f780 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846f880 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846f980 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:04:32.202 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:04:32.202 list of memzone associated elements. 
size: 602.264404 MiB 00:04:32.202 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:04:32.202 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:32.202 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:04:32.202 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:32.202 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:04:32.202 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56780_0 00:04:32.202 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:32.202 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56780_0 00:04:32.202 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:32.202 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56780_0 00:04:32.202 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:04:32.202 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:32.202 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:04:32.202 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:32.202 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:32.202 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56780 00:04:32.202 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:32.202 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56780 00:04:32.202 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:32.202 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56780 00:04:32.202 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:32.202 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:32.202 element at address: 0x200019abc780 with size: 1.008179 MiB 00:04:32.202 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:32.202 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:32.202 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:32.202 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:04:32.202 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:32.202 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:32.202 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56780 00:04:32.202 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:32.202 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56780 00:04:32.202 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:04:32.202 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56780 00:04:32.202 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:04:32.202 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56780 00:04:32.202 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:04:32.202 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56780 00:04:32.202 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:04:32.203 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:32.203 element at address: 0x200013878680 with size: 0.500549 MiB 00:04:32.203 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:32.203 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:04:32.203 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:32.203 element at address: 0x200003adf740 with size: 0.125549 MiB 00:04:32.203 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_56780
00:04:32.203 element at address: 0x200018ef5ac0 with size: 0.031799 MiB
00:04:32.203 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:32.203 element at address: 0x200028464140 with size: 0.023804 MiB
00:04:32.203 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:32.203 element at address: 0x200003adb500 with size: 0.016174 MiB
00:04:32.203 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56780
00:04:32.203 element at address: 0x20002846a2c0 with size: 0.002502 MiB
00:04:32.203 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:32.203 element at address: 0x2000002d5f80 with size: 0.000366 MiB
00:04:32.203 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56780
00:04:32.203 element at address: 0x2000137ffd80 with size: 0.000366 MiB
00:04:32.203 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56780
00:04:32.203 element at address: 0x20002846ae00 with size: 0.000366 MiB
00:04:32.203 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:32.203 14:05:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:32.203 14:05:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56780
00:04:32.203 14:05:30 -- common/autotest_common.sh@936 -- # '[' -z 56780 ']'
00:04:32.203 14:05:30 -- common/autotest_common.sh@940 -- # kill -0 56780
00:04:32.203 14:05:30 -- common/autotest_common.sh@941 -- # uname
00:04:32.203 14:05:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:32.203 14:05:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56780
00:04:32.203 14:05:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:32.203 killing process with pid 56780
00:04:32.203 14:05:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:32.203 14:05:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56780'
00:04:32.203 14:05:30 -- common/autotest_common.sh@955 -- # kill 56780
00:04:32.203 14:05:30 -- common/autotest_common.sh@960 -- # wait 56780
00:04:33.577
00:04:33.577 real 0m2.990s
00:04:33.577 user 0m3.161s
00:04:33.577 sys 0m0.383s
00:04:33.578 14:05:31 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:33.578 ************************************
00:04:33.578 END TEST dpdk_mem_utility
00:04:33.578 ************************************
00:04:33.578 14:05:31 -- common/autotest_common.sh@10 -- # set +x
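The dpdk_mem_utility run that just finished boils down to two RPC-driven steps; a minimal sketch of the same flow run by hand against a live spdk_tgt (socket and paths taken from this run, and assuming rpc_cmd in the trace maps to scripts/rpc.py -s $rpc_addr as elsewhere in this harness):

  # Ask the running target to dump DPDK memory stats to /tmp/spdk_mem_dump.txt...
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
  # ...then post-process the dump: with no flags the helper prints the
  # heap/mempool/memzone totals; -m 0 prints the per-element listing for heap 0.
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0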
00:04:33.578 14:05:31 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:33.578 14:05:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:33.578 14:05:31 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:33.578 14:05:31 -- common/autotest_common.sh@10 -- # set +x
00:04:33.578 ************************************
00:04:33.578 START TEST event
00:04:33.578 ************************************
00:04:33.578 14:05:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:33.578 * Looking for test storage...
00:04:33.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:04:33.578 14:05:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:33.578 14:05:31 -- common/autotest_common.sh@1690 -- # lcov --version
00:04:33.578 14:05:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:33.578 14:05:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:33.578 14:05:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:33.578 14:05:31 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:33.578 14:05:31 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:33.578 14:05:31 -- scripts/common.sh@335 -- # IFS=.-:
00:04:33.578 14:05:31 -- scripts/common.sh@335 -- # read -ra ver1
00:04:33.578 14:05:31 -- scripts/common.sh@336 -- # IFS=.-:
00:04:33.578 14:05:31 -- scripts/common.sh@336 -- # read -ra ver2
00:04:33.578 14:05:31 -- scripts/common.sh@337 -- # local 'op=<'
00:04:33.578 14:05:31 -- scripts/common.sh@339 -- # ver1_l=2
00:04:33.578 14:05:31 -- scripts/common.sh@340 -- # ver2_l=1
00:04:33.578 14:05:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:33.578 14:05:31 -- scripts/common.sh@343 -- # case "$op" in
00:04:33.578 14:05:31 -- scripts/common.sh@344 -- # : 1
00:04:33.578 14:05:31 -- scripts/common.sh@363 -- # (( v = 0 ))
00:04:33.578 14:05:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:33.578 14:05:31 -- scripts/common.sh@364 -- # decimal 1
00:04:33.578 14:05:31 -- scripts/common.sh@352 -- # local d=1
00:04:33.578 14:05:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:33.578 14:05:31 -- scripts/common.sh@354 -- # echo 1
00:04:33.578 14:05:31 -- scripts/common.sh@364 -- # ver1[v]=1
00:04:33.578 14:05:31 -- scripts/common.sh@365 -- # decimal 2
00:04:33.578 14:05:31 -- scripts/common.sh@352 -- # local d=2
00:04:33.578 14:05:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:33.578 14:05:31 -- scripts/common.sh@354 -- # echo 2
00:04:33.578 14:05:31 -- scripts/common.sh@365 -- # ver2[v]=2
00:04:33.578 14:05:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:33.578 14:05:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:33.578 14:05:31 -- scripts/common.sh@367 -- # return 0
00:04:33.578 14:05:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:33.578 14:05:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:33.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:33.578 --rc genhtml_branch_coverage=1
00:04:33.578 --rc genhtml_function_coverage=1
00:04:33.578 --rc genhtml_legend=1
00:04:33.578 --rc geninfo_all_blocks=1
00:04:33.578 --rc geninfo_unexecuted_blocks=1
00:04:33.578
00:04:33.578 '
00:04:33.578 14:05:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:33.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:33.578 --rc genhtml_branch_coverage=1
00:04:33.578 --rc genhtml_function_coverage=1
00:04:33.578 --rc genhtml_legend=1
00:04:33.578 --rc geninfo_all_blocks=1
00:04:33.578 --rc geninfo_unexecuted_blocks=1
00:04:33.578
00:04:33.578 '
00:04:33.578 14:05:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:04:33.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:33.578 --rc genhtml_branch_coverage=1
00:04:33.578 --rc genhtml_function_coverage=1
00:04:33.578 --rc genhtml_legend=1
00:04:33.578 --rc geninfo_all_blocks=1
00:04:33.578 --rc geninfo_unexecuted_blocks=1
00:04:33.578
00:04:33.578 '
00:04:33.578 14:05:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:04:33.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:33.578 --rc genhtml_branch_coverage=1
00:04:33.578 --rc genhtml_function_coverage=1
00:04:33.578 --rc genhtml_legend=1
00:04:33.578 --rc geninfo_all_blocks=1
00:04:33.578 --rc geninfo_unexecuted_blocks=1
00:04:33.578
00:04:33.578 '
00:04:33.578 14:05:32 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:04:33.578 14:05:32 -- bdev/nbd_common.sh@6 -- # set -e
00:04:33.578 14:05:32 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:33.578 14:05:32 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:04:33.578 14:05:32 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:33.578 14:05:32 -- common/autotest_common.sh@10 -- # set +x
00:04:33.578 ************************************
00:04:33.578 START TEST event_perf
00:04:33.578 ************************************
00:04:33.578 14:05:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:33.836 Running I/O for 1 seconds...[2024-11-19 14:05:32.037436] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:04:33.836 [2024-11-19 14:05:32.037521] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56878 ]
00:04:33.836 [2024-11-19 14:05:32.179428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:34.101 [2024-11-19 14:05:32.325666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:34.101 [2024-11-19 14:05:32.325857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:04:34.101 [2024-11-19 14:05:32.326022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:34.101 Running I/O for 1 seconds...[2024-11-19 14:05:32.326047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:04:35.210
00:04:35.210 lcore 0: 206734
00:04:35.210 lcore 1: 206736
00:04:35.210 lcore 2: 206733
00:04:35.210 lcore 3: 206736
00:04:35.210 done.
00:04:35.210
00:04:35.210 real 0m1.532s
00:04:35.210 user 0m4.330s
00:04:35.210 sys 0m0.085s
00:04:35.210 14:05:33 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:35.210 ************************************
00:04:35.210 END TEST event_perf
00:04:35.210 ************************************
00:04:35.210 14:05:33 -- common/autotest_common.sh@10 -- # set +x
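For reference, the event_perf invocation traced above takes a hexadecimal core mask and a run time in seconds; a sketch of the same call (binary path as built in this run):

  # -m 0xF: run reactors on cores 0-3 (hence the four "Reactor started" lines);
  # -t 1: poll events for one second, then print the per-lcore event counts.
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1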
00:04:35.210 14:05:33 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:04:35.210 14:05:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:04:35.210 14:05:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:35.210 14:05:33 -- common/autotest_common.sh@10 -- # set +x
00:04:35.210 ************************************
00:04:35.210 START TEST event_reactor
00:04:35.210 ************************************
00:04:35.210 14:05:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:04:35.210 [2024-11-19 14:05:33.608444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:04:35.210 [2024-11-19 14:05:33.608647] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56923 ]
00:04:35.210 [2024-11-19 14:05:33.757099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:35.497 [2024-11-19 14:05:33.897363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:36.873 test_start
00:04:36.873 oneshot
00:04:36.873 tick 100
00:04:36.873 tick 100
00:04:36.873 tick 250
00:04:36.873 tick 100
00:04:36.873 tick 100
00:04:36.873 tick 250
00:04:36.873 tick 500
00:04:36.873 tick 100
00:04:36.873 tick 100
00:04:36.873 tick 100
00:04:36.873 tick 250
00:04:36.873 tick 100
00:04:36.873 tick 100
00:04:36.873 test_end
00:04:36.873
00:04:36.873 real 0m1.524s
00:04:36.873 user 0m1.338s
00:04:36.873 sys 0m0.077s
00:04:36.873 14:05:35 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:36.873 ************************************
00:04:36.873 END TEST event_reactor
00:04:36.873 ************************************
00:04:36.873 14:05:35 -- common/autotest_common.sh@10 -- # set +x
00:04:36.873 14:05:35 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:36.873 14:05:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:04:36.873 14:05:35 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:36.873 14:05:35 -- common/autotest_common.sh@10 -- # set +x
00:04:36.873 ************************************
00:04:36.873 START TEST event_reactor_perf
00:04:36.873 ************************************
00:04:36.873 14:05:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:36.873 [2024-11-19 14:05:35.172727] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:04:36.873 [2024-11-19 14:05:35.172976] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56954 ]
00:04:36.873 [2024-11-19 14:05:35.313829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:37.131 [2024-11-19 14:05:35.454450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:38.509 test_start
00:04:38.509 test_end
00:04:38.509 Performance: 410827 events per second
00:04:38.509 ************************************
00:04:38.509 END TEST event_reactor_perf
00:04:38.509 ************************************
00:04:38.509
00:04:38.509 real 0m1.516s
00:04:38.509 user 0m1.338s
00:04:38.509 sys 0m0.069s
00:04:38.509 14:05:36 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:38.509 14:05:36 -- common/autotest_common.sh@10 -- # set +x
00:04:38.509 14:05:36 -- event/event.sh@49 -- # uname -s
00:04:38.509 14:05:36 -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:38.509 14:05:36 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:04:38.509 14:05:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:38.509 14:05:36 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:38.509 14:05:36 -- common/autotest_common.sh@10 -- # set +x
00:04:38.509 ************************************
00:04:38.509 START TEST event_scheduler
00:04:38.509 ************************************
00:04:38.509 14:05:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:04:38.509 * Looking for test storage...
00:04:38.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:04:38.509 14:05:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:38.509 14:05:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:38.509 14:05:36 -- common/autotest_common.sh@1690 -- # lcov --version
00:04:38.509 14:05:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:38.509 14:05:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:38.509 14:05:36 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:38.509 14:05:36 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:38.509 14:05:36 -- scripts/common.sh@335 -- # IFS=.-:
00:04:38.509 14:05:36 -- scripts/common.sh@335 -- # read -ra ver1
00:04:38.509 14:05:36 -- scripts/common.sh@336 -- # IFS=.-:
00:04:38.509 14:05:36 -- scripts/common.sh@336 -- # read -ra ver2
00:04:38.509 14:05:36 -- scripts/common.sh@337 -- # local 'op=<'
00:04:38.509 14:05:36 -- scripts/common.sh@339 -- # ver1_l=2
00:04:38.509 14:05:36 -- scripts/common.sh@340 -- # ver2_l=1
00:04:38.509 14:05:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:38.509 14:05:36 -- scripts/common.sh@343 -- # case "$op" in
00:04:38.509 14:05:36 -- scripts/common.sh@344 -- # : 1
00:04:38.509 14:05:36 -- scripts/common.sh@363 -- # (( v = 0 ))
00:04:38.509 14:05:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:38.509 14:05:36 -- scripts/common.sh@364 -- # decimal 1
00:04:38.509 14:05:36 -- scripts/common.sh@352 -- # local d=1
00:04:38.509 14:05:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:38.509 14:05:36 -- scripts/common.sh@354 -- # echo 1
00:04:38.509 14:05:36 -- scripts/common.sh@364 -- # ver1[v]=1
00:04:38.509 14:05:36 -- scripts/common.sh@365 -- # decimal 2
00:04:38.509 14:05:36 -- scripts/common.sh@352 -- # local d=2
00:04:38.509 14:05:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:38.509 14:05:36 -- scripts/common.sh@354 -- # echo 2
00:04:38.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:38.509 14:05:36 -- scripts/common.sh@365 -- # ver2[v]=2
00:04:38.509 14:05:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:38.509 14:05:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:38.509 14:05:36 -- scripts/common.sh@367 -- # return 0
00:04:38.509 14:05:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:38.509 14:05:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:38.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.509 --rc genhtml_branch_coverage=1
00:04:38.509 --rc genhtml_function_coverage=1
00:04:38.509 --rc genhtml_legend=1
00:04:38.509 --rc geninfo_all_blocks=1
00:04:38.509 --rc geninfo_unexecuted_blocks=1
00:04:38.509
00:04:38.509 '
00:04:38.509 14:05:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:38.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.509 --rc genhtml_branch_coverage=1
00:04:38.509 --rc genhtml_function_coverage=1
00:04:38.509 --rc genhtml_legend=1
00:04:38.509 --rc geninfo_all_blocks=1
00:04:38.509 --rc geninfo_unexecuted_blocks=1
00:04:38.509
00:04:38.509 '
00:04:38.509 14:05:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:04:38.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.509 --rc genhtml_branch_coverage=1
00:04:38.509 --rc genhtml_function_coverage=1
00:04:38.509 --rc genhtml_legend=1
00:04:38.509 --rc geninfo_all_blocks=1
00:04:38.509 --rc geninfo_unexecuted_blocks=1
00:04:38.509
00:04:38.509 '
00:04:38.509 14:05:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:04:38.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.509 --rc genhtml_branch_coverage=1
00:04:38.509 --rc genhtml_function_coverage=1
00:04:38.509 --rc genhtml_legend=1
00:04:38.509 --rc geninfo_all_blocks=1
00:04:38.509 --rc geninfo_unexecuted_blocks=1
00:04:38.509
00:04:38.509 '
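The scripts/common.sh trace above (lt 1.15 2 invoking cmp_versions 1.15 '<' 2) is a field-by-field numeric version compare used to pick the lcov option spelling; a condensed sketch of that logic (simplified: the real helper also normalizes each field through its decimal function):

  # Returns 0 when dotted version $1 sorts before $2, as "lt 1.15 2" does above.
  lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # first differing field decides
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1  # equal is not less-than
  }
  lt 1.15 2 && echo "installed lcov predates 2.x"  # returns 0 here, as in the trace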
00:04:38.509 14:05:36 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:38.509 14:05:36 -- scheduler/scheduler.sh@35 -- # scheduler_pid=57029
00:04:38.509 14:05:36 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:38.509 14:05:36 -- scheduler/scheduler.sh@37 -- # waitforlisten 57029
00:04:38.509 14:05:36 -- common/autotest_common.sh@829 -- # '[' -z 57029 ']'
00:04:38.509 14:05:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:38.510 14:05:36 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:38.510 14:05:36 -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:38.510 14:05:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:38.510 14:05:36 -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:38.510 14:05:36 -- common/autotest_common.sh@10 -- # set +x
00:04:38.510 [2024-11-19 14:05:36.896329] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:04:38.510 [2024-11-19 14:05:36.896862] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57029 ]
00:04:38.768 [2024-11-19 14:05:37.044949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:38.768 [2024-11-19 14:05:37.229686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:38.768 [2024-11-19 14:05:37.229954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:38.768 [2024-11-19 14:05:37.230141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:04:38.768 [2024-11-19 14:05:37.230309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:04:39.339 14:05:37 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:39.339 14:05:37 -- common/autotest_common.sh@862 -- # return 0
00:04:39.339 14:05:37 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:39.339 14:05:37 -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:39.339 14:05:37 -- common/autotest_common.sh@10 -- # set +x
00:04:39.339 POWER: Env isn't set yet!
00:04:39.339 POWER: Attempting to initialise ACPI cpufreq power management...
00:04:39.339 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:39.339 POWER: Cannot set governor of lcore 0 to userspace
00:04:39.339 POWER: Attempting to initialise PSTAT power management...
00:04:39.339 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:39.339 POWER: Cannot set governor of lcore 0 to performance
00:04:39.339 POWER: Attempting to initialise AMD PSTATE power management...
00:04:39.339 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:39.339 POWER: Cannot set governor of lcore 0 to userspace
00:04:39.339 POWER: Attempting to initialise CPPC power management...
00:04:39.339 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:39.339 POWER: Cannot set governor of lcore 0 to userspace
00:04:39.339 POWER: Attempting to initialise VM power management...
00:04:39.339 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:04:39.339 POWER: Unable to set Power Management Environment for lcore 0
00:04:39.339 [2024-11-19 14:05:37.724405] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0
00:04:39.339 [2024-11-19 14:05:37.724422] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0
00:04:39.339 [2024-11-19 14:05:37.724432] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor
00:04:39.339 [2024-11-19 14:05:37.724447] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:04:39.339 [2024-11-19 14:05:37.724458] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:04:39.339 [2024-11-19 14:05:37.724465] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:04:39.339 14:05:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:39.339 14:05:37 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:39.339 14:05:37 -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:39.339 14:05:37 -- common/autotest_common.sh@10 -- # set +x
00:04:39.600 [2024-11-19 14:05:37.944956] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:39.600 14:05:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
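Because the scheduler app was launched with --wait-for-rpc, the scheduler choice happens over RPC before subsystem initialization; the POWER errors above are the dpdk governor probing each cpufreq driver and giving up (expected in a VM), after which the dynamic scheduler falls back to its default limits (load 20, core 80, busy 95). A sketch of the RPC ordering, using the same socket as this run:

  # Must be issued while the app is still parked in --wait-for-rpc:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
  # Then let initialization proceed; the chosen scheduler takes effect from here on:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init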
00:04:39.600 14:05:37 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:39.600 14:05:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:39.600 14:05:37 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:39.600 14:05:37 -- common/autotest_common.sh@10 -- # set +x
00:04:39.600 ************************************
00:04:39.600 START TEST scheduler_create_thread
00:04:39.600 ************************************
00:04:39.600 14:05:37 -- common/autotest_common.sh@1114 -- # scheduler_create_thread
00:04:39.600 14:05:37 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:39.600 14:05:37 -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:39.600 14:05:37 -- common/autotest_common.sh@10 -- # set +x
00:04:39.600 2
00:04:39.600 14:05:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:39.600 14:05:37 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:39.600 14:05:37 -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:39.600 14:05:37 -- common/autotest_common.sh@10 -- # set +x
00:04:39.600 3
00:04:39.600 14:05:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:39.600 14:05:37 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:39.600 14:05:37 -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:39.600 14:05:37 -- common/autotest_common.sh@10 -- # set +x
00:04:39.600 4
00:04:39.600 14:05:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:39.601 14:05:37 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:39.601 14:05:37 -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:39.601 14:05:37 -- common/autotest_common.sh@10 -- # set +x
00:04:39.601 5
00:04:39.601 14:05:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:39.601 14:05:37 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:39.601 14:05:37 -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:39.601 14:05:37 -- common/autotest_common.sh@10 -- # set +x
00:04:39.601 6
00:04:39.601 14:05:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:39.601 14:05:37 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:39.601 14:05:37 -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:39.601 14:05:37 -- common/autotest_common.sh@10 -- # set +x
00:04:39.601 7
00:04:39.601 14:05:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:39.601 14:05:38 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:39.601 14:05:38 -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:39.601 14:05:38 -- common/autotest_common.sh@10 -- # set +x
00:04:39.601 8
00:04:39.601 14:05:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:39.601 14:05:38 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:39.601 14:05:38 -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:39.601 14:05:38 -- common/autotest_common.sh@10 -- # set +x
00:04:39.601 9
00:04:39.601 14:05:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:39.601 14:05:38 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:39.601 14:05:38 -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:39.601 14:05:38 -- common/autotest_common.sh@10 -- # set +x
00:04:39.601 10
00:04:39.601 14:05:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:39.601 14:05:38 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:39.601 14:05:38 -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:39.601 14:05:38 -- common/autotest_common.sh@10 -- # set +x
00:04:39.601 14:05:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:39.601 14:05:38 -- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:39.601 14:05:38 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:39.601 14:05:38 -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:39.601 14:05:38 -- common/autotest_common.sh@10 -- # set +x
00:04:39.601 14:05:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:39.601 14:05:38 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:04:39.601 14:05:38 -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:39.601 14:05:38 -- common/autotest_common.sh@10 -- # set +x
00:04:39.601 14:05:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:39.601 14:05:38 -- scheduler/scheduler.sh@25 -- # thread_id=12
00:04:39.601 14:05:38 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:04:39.601 14:05:38 -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:39.601 14:05:38 -- common/autotest_common.sh@10 -- # set +x
00:04:40.979 ************************************
00:04:40.979 END TEST scheduler_create_thread
00:04:40.979 ************************************
00:04:40.979 14:05:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:40.979
00:04:40.979 real 0m1.173s
00:04:40.979 user 0m0.015s
00:04:40.979 sys 0m0.003s
00:04:40.979 14:05:39 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:40.979 14:05:39 -- common/autotest_common.sh@10 -- # set +x
00:04:40.979 14:05:39 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:04:40.979 14:05:39 -- scheduler/scheduler.sh@46 -- # killprocess 57029
00:04:40.979 14:05:39 -- common/autotest_common.sh@936 -- # '[' -z 57029 ']'
00:04:40.979 14:05:39 -- common/autotest_common.sh@940 -- # kill -0 57029
00:04:40.979 14:05:39 -- common/autotest_common.sh@941 -- # uname
00:04:40.979 14:05:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:40.979 14:05:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57029
00:04:40.979 14:05:39 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:04:40.979 killing process with pid 57029
00:04:40.979 14:05:39 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:04:40.979 14:05:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57029'
00:04:40.979 14:05:39 -- common/autotest_common.sh@955 -- # kill 57029
00:04:40.980 14:05:39 -- common/autotest_common.sh@960 -- # wait 57029
00:04:41.249 [2024-11-19 14:05:39.606160] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:04:41.811
00:04:41.811 real 0m3.532s
00:04:41.811 user 0m5.474s
00:04:41.811 sys 0m0.323s
00:04:41.811 14:05:40 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:41.811 14:05:40 -- common/autotest_common.sh@10 -- # set +x
00:04:41.811 ************************************
00:04:41.811 END TEST event_scheduler
00:04:41.811 ************************************
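The scheduler_create_thread trace above drives everything through an out-of-tree RPC plugin; a sketch of the same calls issued manually (plugin name, arguments, and thread ids all taken from the trace; the PYTHONPATH entry for the plugin's directory is an assumption about where the test keeps it):

  export PYTHONPATH="$PYTHONPATH:/home/vagrant/spdk_repo/spdk/test/event/scheduler"  # assumed plugin location
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"
  $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100  # pinned to core 0, 100% active
  $RPC scheduler_thread_create -n half_active -a 0             # returned thread id 11 above
  $RPC scheduler_thread_set_active 11 50                       # retune it to 50% active
  $RPC scheduler_thread_create -n deleted -a 100               # returned thread id 12 above
  $RPC scheduler_thread_delete 12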
00:04:41.811 14:05:40 -- event/event.sh@19 -- # repeat_pid=57113 00:04:41.811 14:05:40 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.811 14:05:40 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57113' 00:04:41.811 14:05:40 -- event/event.sh@23 -- # for i in {0..2} 00:04:41.811 14:05:40 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:41.811 14:05:40 -- event/event.sh@25 -- # waitforlisten 57113 /var/tmp/spdk-nbd.sock 00:04:41.811 14:05:40 -- common/autotest_common.sh@829 -- # '[' -z 57113 ']' 00:04:41.811 14:05:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:41.811 14:05:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.812 14:05:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:41.812 14:05:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.812 14:05:40 -- common/autotest_common.sh@10 -- # set +x 00:04:41.812 14:05:40 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:41.812 [2024-11-19 14:05:40.324353] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:41.812 [2024-11-19 14:05:40.324459] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57113 ] 00:04:42.073 [2024-11-19 14:05:40.470517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.073 [2024-11-19 14:05:40.618435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.073 [2024-11-19 14:05:40.618540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.645 14:05:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.645 14:05:41 -- common/autotest_common.sh@862 -- # return 0 00:04:42.645 14:05:41 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:42.906 Malloc0 00:04:42.906 14:05:41 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:43.167 Malloc1 00:04:43.167 14:05:41 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:43.167 14:05:41 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.167 14:05:41 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.167 14:05:41 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:43.167 14:05:41 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.167 14:05:41 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:43.167 14:05:41 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:43.167 14:05:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.167 14:05:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.167 14:05:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:43.167 14:05:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.167 14:05:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:43.167 14:05:41 -- bdev/nbd_common.sh@12 -- # local i 00:04:43.167 14:05:41 -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:43.167 14:05:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.167 14:05:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:43.428 /dev/nbd0 00:04:43.428 14:05:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:43.428 14:05:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:43.428 14:05:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:43.428 14:05:41 -- common/autotest_common.sh@867 -- # local i 00:04:43.428 14:05:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:43.428 14:05:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:43.428 14:05:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:43.428 14:05:41 -- common/autotest_common.sh@871 -- # break 00:04:43.428 14:05:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:43.428 14:05:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:43.428 14:05:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:43.428 1+0 records in 00:04:43.428 1+0 records out 00:04:43.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225497 s, 18.2 MB/s 00:04:43.428 14:05:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:43.428 14:05:41 -- common/autotest_common.sh@884 -- # size=4096 00:04:43.428 14:05:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:43.428 14:05:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:43.428 14:05:41 -- common/autotest_common.sh@887 -- # return 0 00:04:43.428 14:05:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:43.428 14:05:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.428 14:05:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:43.688 /dev/nbd1 00:04:43.688 14:05:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:43.688 14:05:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:43.688 14:05:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:43.688 14:05:42 -- common/autotest_common.sh@867 -- # local i 00:04:43.688 14:05:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:43.688 14:05:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:43.688 14:05:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:43.688 14:05:42 -- common/autotest_common.sh@871 -- # break 00:04:43.688 14:05:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:43.688 14:05:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:43.688 14:05:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:43.688 1+0 records in 00:04:43.688 1+0 records out 00:04:43.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200764 s, 20.4 MB/s 00:04:43.688 14:05:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:43.688 14:05:42 -- common/autotest_common.sh@884 -- # size=4096 00:04:43.688 14:05:42 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:43.688 14:05:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:43.688 14:05:42 -- common/autotest_common.sh@887 -- # return 0 00:04:43.688 
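Both waitfornbd calls above follow the same two-stage pattern, which can be read straight off the xtrace; a reconstruction (the retry delay is an assumption, since a bare sleep would not echo here):

    waitfornbd() {
        local nbd_name=$1
        local i
        # Stage 1: wait for the kernel to publish the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed; not visible in the trace
        done
        # Stage 2: prove it is readable by pulling one 4 KiB block with O_DIRECT.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=nbdtest bs=4096 count=1 iflag=direct || continue
            local size
            size=$(stat -c %s nbdtest)   # the trace uses a fixed nbdtest path under the repo
            rm -f nbdtest
            [ "$size" != 0 ] && return 0   # a non-empty direct read means the export is live
        done
        return 1
    }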
14:05:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:43.688 14:05:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.688 14:05:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:43.688 14:05:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.688 14:05:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:43.688 14:05:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:43.688 { 00:04:43.688 "nbd_device": "/dev/nbd0", 00:04:43.688 "bdev_name": "Malloc0" 00:04:43.688 }, 00:04:43.688 { 00:04:43.688 "nbd_device": "/dev/nbd1", 00:04:43.688 "bdev_name": "Malloc1" 00:04:43.688 } 00:04:43.688 ]' 00:04:43.688 14:05:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:43.688 { 00:04:43.688 "nbd_device": "/dev/nbd0", 00:04:43.688 "bdev_name": "Malloc0" 00:04:43.688 }, 00:04:43.688 { 00:04:43.688 "nbd_device": "/dev/nbd1", 00:04:43.688 "bdev_name": "Malloc1" 00:04:43.688 } 00:04:43.688 ]' 00:04:43.688 14:05:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:43.949 14:05:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:43.949 /dev/nbd1' 00:04:43.949 14:05:42 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:43.949 /dev/nbd1' 00:04:43.949 14:05:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:43.949 14:05:42 -- bdev/nbd_common.sh@65 -- # count=2 00:04:43.949 14:05:42 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:43.949 14:05:42 -- bdev/nbd_common.sh@95 -- # count=2 00:04:43.949 14:05:42 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:43.949 14:05:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:43.949 14:05:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.949 14:05:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:43.949 14:05:42 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:43.950 256+0 records in 00:04:43.950 256+0 records out 00:04:43.950 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513658 s, 204 MB/s 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:43.950 256+0 records in 00:04:43.950 256+0 records out 00:04:43.950 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197475 s, 53.1 MB/s 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:43.950 256+0 records in 00:04:43.950 256+0 records out 00:04:43.950 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193922 s, 54.1 MB/s 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@51 -- # local i 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:43.950 14:05:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@41 -- # break 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@45 -- # return 0 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@41 -- # break 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@45 -- # return 0 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.212 14:05:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.473 14:05:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:44.473 14:05:42 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:44.473 14:05:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:44.473 14:05:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:44.473 14:05:42 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:44.473 14:05:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:44.473 14:05:42 -- bdev/nbd_common.sh@65 -- # true 00:04:44.473 14:05:42 -- bdev/nbd_common.sh@65 -- # count=0 00:04:44.473 
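The write and verify passes above are the core of nbd_rpc_data_verify: 1 MiB of random data is pushed through each NBD device with direct I/O and then compared back byte-for-byte against the source file. In isolation:

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256              # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct   # write through the export
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M nbdrandtest "$nbd"   # -b reports differing bytes; -n 1M bounds the compare
    done
    rm nbdrandtest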
14:05:42 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:44.473 14:05:42 -- bdev/nbd_common.sh@104 -- # count=0 00:04:44.473 14:05:42 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:44.473 14:05:42 -- bdev/nbd_common.sh@109 -- # return 0 00:04:44.473 14:05:42 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:44.734 14:05:43 -- event/event.sh@35 -- # sleep 3 00:04:45.305 [2024-11-19 14:05:43.822963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.567 [2024-11-19 14:05:43.969283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.567 [2024-11-19 14:05:43.969387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.567 [2024-11-19 14:05:44.073766] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:45.567 [2024-11-19 14:05:44.073823] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:48.109 spdk_app_start Round 1 00:04:48.109 14:05:46 -- event/event.sh@23 -- # for i in {0..2} 00:04:48.109 14:05:46 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:48.109 14:05:46 -- event/event.sh@25 -- # waitforlisten 57113 /var/tmp/spdk-nbd.sock 00:04:48.109 14:05:46 -- common/autotest_common.sh@829 -- # '[' -z 57113 ']' 00:04:48.109 14:05:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:48.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:48.109 14:05:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.109 14:05:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
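Every round closes the way this one just did: the harness asks the app to terminate itself over RPC and pauses before the next iteration re-runs the whole malloc/NBD cycle. Per the for i in {0..2} visible at the start of the test:

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # app listens again after restarting
        # ... bdev_malloc_create x2, nbd start, dd/cmp verify, nbd stop (as above) ...
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
            spdk_kill_instance SIGTERM
        sleep 3   # give the app time to cycle its framework
    done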
00:04:48.109 14:05:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.109 14:05:46 -- common/autotest_common.sh@10 -- # set +x 00:04:48.109 14:05:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.109 14:05:46 -- common/autotest_common.sh@862 -- # return 0 00:04:48.109 14:05:46 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.109 Malloc0 00:04:48.109 14:05:46 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.367 Malloc1 00:04:48.367 14:05:46 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.367 14:05:46 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.367 14:05:46 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.367 14:05:46 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:48.367 14:05:46 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.367 14:05:46 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:48.367 14:05:46 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.367 14:05:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.367 14:05:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.367 14:05:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:48.367 14:05:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.367 14:05:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:48.367 14:05:46 -- bdev/nbd_common.sh@12 -- # local i 00:04:48.367 14:05:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:48.367 14:05:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.367 14:05:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:48.625 /dev/nbd0 00:04:48.625 14:05:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:48.625 14:05:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:48.625 14:05:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:48.625 14:05:47 -- common/autotest_common.sh@867 -- # local i 00:04:48.625 14:05:47 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:48.625 14:05:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:48.625 14:05:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:48.625 14:05:47 -- common/autotest_common.sh@871 -- # break 00:04:48.625 14:05:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:48.625 14:05:47 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:48.625 14:05:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.625 1+0 records in 00:04:48.625 1+0 records out 00:04:48.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167055 s, 24.5 MB/s 00:04:48.625 14:05:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.625 14:05:47 -- common/autotest_common.sh@884 -- # size=4096 00:04:48.625 14:05:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.625 14:05:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:48.625 14:05:47 -- common/autotest_common.sh@887 -- # return 0 00:04:48.625 14:05:47 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.625 14:05:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.625 14:05:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:48.884 /dev/nbd1 00:04:48.884 14:05:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:48.884 14:05:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:48.884 14:05:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:48.884 14:05:47 -- common/autotest_common.sh@867 -- # local i 00:04:48.884 14:05:47 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:48.884 14:05:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:48.884 14:05:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:48.884 14:05:47 -- common/autotest_common.sh@871 -- # break 00:04:48.884 14:05:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:48.884 14:05:47 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:48.884 14:05:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.884 1+0 records in 00:04:48.884 1+0 records out 00:04:48.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278668 s, 14.7 MB/s 00:04:48.884 14:05:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.884 14:05:47 -- common/autotest_common.sh@884 -- # size=4096 00:04:48.884 14:05:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.884 14:05:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:48.884 14:05:47 -- common/autotest_common.sh@887 -- # return 0 00:04:48.884 14:05:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.884 14:05:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.884 14:05:47 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.884 14:05:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.884 14:05:47 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.884 14:05:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:48.884 { 00:04:48.884 "nbd_device": "/dev/nbd0", 00:04:48.884 "bdev_name": "Malloc0" 00:04:48.884 }, 00:04:48.884 { 00:04:48.884 "nbd_device": "/dev/nbd1", 00:04:48.884 "bdev_name": "Malloc1" 00:04:48.884 } 00:04:48.884 ]' 00:04:48.884 14:05:47 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:48.884 { 00:04:48.884 "nbd_device": "/dev/nbd0", 00:04:48.884 "bdev_name": "Malloc0" 00:04:48.884 }, 00:04:48.884 { 00:04:48.884 "nbd_device": "/dev/nbd1", 00:04:48.884 "bdev_name": "Malloc1" 00:04:48.884 } 00:04:48.884 ]' 00:04:48.884 14:05:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.884 14:05:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:48.884 /dev/nbd1' 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:49.143 /dev/nbd1' 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@65 -- # count=2 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@95 -- # count=2 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:49.143 256+0 records in 00:04:49.143 256+0 records out 00:04:49.143 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00960978 s, 109 MB/s 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:49.143 256+0 records in 00:04:49.143 256+0 records out 00:04:49.143 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126972 s, 82.6 MB/s 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:49.143 256+0 records in 00:04:49.143 256+0 records out 00:04:49.143 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168505 s, 62.2 MB/s 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@51 -- # local i 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@41 -- # break 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.143 14:05:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:49.403 14:05:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:49.403 14:05:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:49.403 14:05:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:49.403 14:05:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.403 14:05:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.403 14:05:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:49.403 14:05:47 -- bdev/nbd_common.sh@41 -- # break 00:04:49.403 14:05:47 -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.403 14:05:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.403 14:05:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.403 14:05:47 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.662 14:05:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:49.662 14:05:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.662 14:05:48 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:49.663 14:05:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:49.663 14:05:48 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:49.663 14:05:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.663 14:05:48 -- bdev/nbd_common.sh@65 -- # true 00:04:49.663 14:05:48 -- bdev/nbd_common.sh@65 -- # count=0 00:04:49.663 14:05:48 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:49.663 14:05:48 -- bdev/nbd_common.sh@104 -- # count=0 00:04:49.663 14:05:48 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:49.663 14:05:48 -- bdev/nbd_common.sh@109 -- # return 0 00:04:49.663 14:05:48 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:49.921 14:05:48 -- event/event.sh@35 -- # sleep 3 00:04:50.854 [2024-11-19 14:05:49.077476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.854 [2024-11-19 14:05:49.210465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.854 [2024-11-19 14:05:49.210480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.854 [2024-11-19 14:05:49.314160] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:50.854 [2024-11-19 14:05:49.314218] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:53.381 spdk_app_start Round 2 00:04:53.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
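waitfornbd_exit, used above after each nbd_stop_disk RPC, is the inverse of waitfornbd: poll until the name drops out of /proc/partitions. Reconstructed from the trace (again, the retry delay is assumed):

    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone from the kernel: done
            sleep 0.1   # assumed; the trace only shows the grep and the break
        done
        return 0
    }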
00:04:53.381 14:05:51 -- event/event.sh@23 -- # for i in {0..2} 00:04:53.381 14:05:51 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:53.381 14:05:51 -- event/event.sh@25 -- # waitforlisten 57113 /var/tmp/spdk-nbd.sock 00:04:53.381 14:05:51 -- common/autotest_common.sh@829 -- # '[' -z 57113 ']' 00:04:53.381 14:05:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:53.381 14:05:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.381 14:05:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:53.381 14:05:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.381 14:05:51 -- common/autotest_common.sh@10 -- # set +x 00:04:53.381 14:05:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.381 14:05:51 -- common/autotest_common.sh@862 -- # return 0 00:04:53.381 14:05:51 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.381 Malloc0 00:04:53.381 14:05:51 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.681 Malloc1 00:04:53.681 14:05:52 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.681 14:05:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.681 14:05:52 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.681 14:05:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:53.681 14:05:52 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.681 14:05:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:53.681 14:05:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.681 14:05:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.681 14:05:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.681 14:05:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:53.681 14:05:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.681 14:05:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:53.681 14:05:52 -- bdev/nbd_common.sh@12 -- # local i 00:04:53.681 14:05:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:53.681 14:05:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.681 14:05:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:53.939 /dev/nbd0 00:04:53.939 14:05:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:53.939 14:05:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:53.939 14:05:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:53.939 14:05:52 -- common/autotest_common.sh@867 -- # local i 00:04:53.939 14:05:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:53.939 14:05:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:53.939 14:05:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:53.939 14:05:52 -- common/autotest_common.sh@871 -- # break 00:04:53.939 14:05:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:53.939 14:05:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:53.939 14:05:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:53.939 1+0 records in 00:04:53.939 1+0 records out 00:04:53.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263952 s, 15.5 MB/s 00:04:53.939 14:05:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.939 14:05:52 -- common/autotest_common.sh@884 -- # size=4096 00:04:53.939 14:05:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.939 14:05:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:53.939 14:05:52 -- common/autotest_common.sh@887 -- # return 0 00:04:53.939 14:05:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.939 14:05:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.939 14:05:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.196 /dev/nbd1 00:04:54.196 14:05:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:54.196 14:05:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:54.196 14:05:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:54.196 14:05:52 -- common/autotest_common.sh@867 -- # local i 00:04:54.196 14:05:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:54.196 14:05:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:54.196 14:05:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:54.196 14:05:52 -- common/autotest_common.sh@871 -- # break 00:04:54.196 14:05:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:54.196 14:05:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:54.196 14:05:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.196 1+0 records in 00:04:54.196 1+0 records out 00:04:54.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241858 s, 16.9 MB/s 00:04:54.196 14:05:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.196 14:05:52 -- common/autotest_common.sh@884 -- # size=4096 00:04:54.196 14:05:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.196 14:05:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:54.196 14:05:52 -- common/autotest_common.sh@887 -- # return 0 00:04:54.196 14:05:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.196 14:05:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.196 14:05:52 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.196 14:05:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.196 14:05:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.196 14:05:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:54.196 { 00:04:54.196 "nbd_device": "/dev/nbd0", 00:04:54.196 "bdev_name": "Malloc0" 00:04:54.196 }, 00:04:54.196 { 00:04:54.196 "nbd_device": "/dev/nbd1", 00:04:54.196 "bdev_name": "Malloc1" 00:04:54.196 } 00:04:54.196 ]' 00:04:54.196 14:05:52 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.196 { 00:04:54.196 "nbd_device": "/dev/nbd0", 00:04:54.196 "bdev_name": "Malloc0" 00:04:54.196 }, 00:04:54.196 { 00:04:54.196 "nbd_device": "/dev/nbd1", 00:04:54.196 "bdev_name": "Malloc1" 00:04:54.196 } 00:04:54.196 ]' 00:04:54.196 14:05:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@64 -- 
# nbd_disks_name='/dev/nbd0 00:04:54.454 /dev/nbd1' 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:54.454 /dev/nbd1' 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:54.454 256+0 records in 00:04:54.454 256+0 records out 00:04:54.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00732745 s, 143 MB/s 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:54.454 256+0 records in 00:04:54.454 256+0 records out 00:04:54.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020673 s, 50.7 MB/s 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:54.454 256+0 records in 00:04:54.454 256+0 records out 00:04:54.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188811 s, 55.5 MB/s 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.454 14:05:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.455 14:05:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:54.455 14:05:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.455 14:05:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:54.455 14:05:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:54.455 14:05:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.455 14:05:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:54.455 14:05:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.455 14:05:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:54.455 14:05:52 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.455 14:05:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:54.455 14:05:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.455 14:05:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.455 14:05:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:54.455 14:05:52 -- bdev/nbd_common.sh@51 -- # local i 00:04:54.455 
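The count checks in each round (nbd_get_count in the trace) reduce to one pipeline: list the exports over RPC, project out the device paths with jq, and count the /dev/nbd matches. A standalone version with the socket used throughout this test:

    sock=/var/tmp/spdk-nbd.sock
    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_get_disks \
            | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd || true)   # grep -c still prints 0 on no match; || true keeps -e happy
    [ "$count" -ne 2 ] && echo "expected 2 NBD devices, found $count" >&2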
14:05:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.455 14:05:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@41 -- # break 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@41 -- # break 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.713 14:05:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.970 14:05:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:54.970 14:05:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:54.970 14:05:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.970 14:05:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:54.970 14:05:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:54.970 14:05:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.970 14:05:53 -- bdev/nbd_common.sh@65 -- # true 00:04:54.971 14:05:53 -- bdev/nbd_common.sh@65 -- # count=0 00:04:54.971 14:05:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:54.971 14:05:53 -- bdev/nbd_common.sh@104 -- # count=0 00:04:54.971 14:05:53 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:54.971 14:05:53 -- bdev/nbd_common.sh@109 -- # return 0 00:04:54.971 14:05:53 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:55.229 14:05:53 -- event/event.sh@35 -- # sleep 3 00:04:56.162 [2024-11-19 14:05:54.444304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.162 [2024-11-19 14:05:54.615180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.162 [2024-11-19 14:05:54.615191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.420 [2024-11-19 14:05:54.731531] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:56.420 [2024-11-19 14:05:54.731591] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
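Assembled from the three rounds above, one pass of app_repeat_test reduces to the following skeleton; paths, RPC names, and arguments are as they appear in the trace, and the helpers are the ones sketched earlier:

    sock=/var/tmp/spdk-nbd.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    waitforlisten "$repeat_pid" "$sock"
    "$rpc" -s "$sock" bdev_malloc_create 64 4096                 # -> Malloc0 (64 MiB, 4 KiB blocks)
    "$rpc" -s "$sock" bdev_malloc_create 64 4096                 # -> Malloc1
    "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0 && waitfornbd nbd0
    "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1 && waitfornbd nbd1
    # ... random-data write + cmp verify as sketched above ...
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0 && waitfornbd_exit nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1 && waitfornbd_exit nbd1
    "$rpc" -s "$sock" spdk_kill_instance SIGTERM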
00:04:58.320 14:05:56 -- event/event.sh@38 -- # waitforlisten 57113 /var/tmp/spdk-nbd.sock 00:04:58.320 14:05:56 -- common/autotest_common.sh@829 -- # '[' -z 57113 ']' 00:04:58.320 14:05:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:58.320 14:05:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.320 14:05:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:58.320 14:05:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.320 14:05:56 -- common/autotest_common.sh@10 -- # set +x 00:04:58.578 14:05:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.578 14:05:56 -- common/autotest_common.sh@862 -- # return 0 00:04:58.578 14:05:56 -- event/event.sh@39 -- # killprocess 57113 00:04:58.578 14:05:56 -- common/autotest_common.sh@936 -- # '[' -z 57113 ']' 00:04:58.578 14:05:56 -- common/autotest_common.sh@940 -- # kill -0 57113 00:04:58.578 14:05:56 -- common/autotest_common.sh@941 -- # uname 00:04:58.578 14:05:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:58.578 14:05:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57113 00:04:58.578 killing process with pid 57113 00:04:58.578 14:05:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:58.578 14:05:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:58.578 14:05:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57113' 00:04:58.578 14:05:56 -- common/autotest_common.sh@955 -- # kill 57113 00:04:58.578 14:05:56 -- common/autotest_common.sh@960 -- # wait 57113 00:04:59.145 spdk_app_start is called in Round 0. 00:04:59.145 Shutdown signal received, stop current app iteration 00:04:59.145 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:59.145 spdk_app_start is called in Round 1. 00:04:59.145 Shutdown signal received, stop current app iteration 00:04:59.145 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:59.145 spdk_app_start is called in Round 2. 00:04:59.145 Shutdown signal received, stop current app iteration 00:04:59.145 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:59.145 spdk_app_start is called in Round 3. 
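killprocess, applied here to the app_repeat pid just as it was to the scheduler app earlier, reads straight off its xtrace; a simplified reconstruction (the real helper in autotest_common.sh carries extra branches, e.g. for targets launched under sudo):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                          # still alive?
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 above
            [ "$process_name" = sudo ] && return 1          # never kill the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                                 # reap; a nonzero exit is fine here
    }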
00:04:59.145 Shutdown signal received, stop current app iteration 00:04:59.145 ************************************ 00:04:59.145 END TEST app_repeat 00:04:59.145 ************************************ 00:04:59.145 14:05:57 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:59.145 14:05:57 -- event/event.sh@42 -- # return 0 00:04:59.145 00:04:59.145 real 0m17.332s 00:04:59.145 user 0m37.013s 00:04:59.145 sys 0m2.021s 00:04:59.145 14:05:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.145 14:05:57 -- common/autotest_common.sh@10 -- # set +x 00:04:59.145 14:05:57 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:59.145 14:05:57 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:59.145 14:05:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.145 14:05:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.145 14:05:57 -- common/autotest_common.sh@10 -- # set +x 00:04:59.145 ************************************ 00:04:59.145 START TEST cpu_locks 00:04:59.145 ************************************ 00:04:59.145 14:05:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:59.402 * Looking for test storage... 00:04:59.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:59.402 14:05:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:59.402 14:05:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:59.402 14:05:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:59.402 14:05:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:59.402 14:05:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:59.403 14:05:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:59.403 14:05:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:59.403 14:05:57 -- scripts/common.sh@335 -- # IFS=.-: 00:04:59.403 14:05:57 -- scripts/common.sh@335 -- # read -ra ver1 00:04:59.403 14:05:57 -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.403 14:05:57 -- scripts/common.sh@336 -- # read -ra ver2 00:04:59.403 14:05:57 -- scripts/common.sh@337 -- # local 'op=<' 00:04:59.403 14:05:57 -- scripts/common.sh@339 -- # ver1_l=2 00:04:59.403 14:05:57 -- scripts/common.sh@340 -- # ver2_l=1 00:04:59.403 14:05:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:59.403 14:05:57 -- scripts/common.sh@343 -- # case "$op" in 00:04:59.403 14:05:57 -- scripts/common.sh@344 -- # : 1 00:04:59.403 14:05:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:59.403 14:05:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.403 14:05:57 -- scripts/common.sh@364 -- # decimal 1 00:04:59.403 14:05:57 -- scripts/common.sh@352 -- # local d=1 00:04:59.403 14:05:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.403 14:05:57 -- scripts/common.sh@354 -- # echo 1 00:04:59.403 14:05:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:59.403 14:05:57 -- scripts/common.sh@365 -- # decimal 2 00:04:59.403 14:05:57 -- scripts/common.sh@352 -- # local d=2 00:04:59.403 14:05:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.403 14:05:57 -- scripts/common.sh@354 -- # echo 2 00:04:59.403 14:05:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:59.403 14:05:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:59.403 14:05:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:59.403 14:05:57 -- scripts/common.sh@367 -- # return 0 00:04:59.403 14:05:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.403 14:05:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:59.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.403 --rc genhtml_branch_coverage=1 00:04:59.403 --rc genhtml_function_coverage=1 00:04:59.403 --rc genhtml_legend=1 00:04:59.403 --rc geninfo_all_blocks=1 00:04:59.403 --rc geninfo_unexecuted_blocks=1 00:04:59.403 00:04:59.403 ' 00:04:59.403 14:05:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:59.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.403 --rc genhtml_branch_coverage=1 00:04:59.403 --rc genhtml_function_coverage=1 00:04:59.403 --rc genhtml_legend=1 00:04:59.403 --rc geninfo_all_blocks=1 00:04:59.403 --rc geninfo_unexecuted_blocks=1 00:04:59.403 00:04:59.403 ' 00:04:59.403 14:05:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:59.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.403 --rc genhtml_branch_coverage=1 00:04:59.403 --rc genhtml_function_coverage=1 00:04:59.403 --rc genhtml_legend=1 00:04:59.403 --rc geninfo_all_blocks=1 00:04:59.403 --rc geninfo_unexecuted_blocks=1 00:04:59.403 00:04:59.403 ' 00:04:59.403 14:05:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:59.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.403 --rc genhtml_branch_coverage=1 00:04:59.403 --rc genhtml_function_coverage=1 00:04:59.403 --rc genhtml_legend=1 00:04:59.403 --rc geninfo_all_blocks=1 00:04:59.403 --rc geninfo_unexecuted_blocks=1 00:04:59.403 00:04:59.403 ' 00:04:59.403 14:05:57 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:59.403 14:05:57 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:59.403 14:05:57 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:59.403 14:05:57 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:59.403 14:05:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.403 14:05:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.403 14:05:57 -- common/autotest_common.sh@10 -- # set +x 00:04:59.403 ************************************ 00:04:59.403 START TEST default_locks 00:04:59.403 ************************************ 00:04:59.403 14:05:57 -- common/autotest_common.sh@1114 -- # default_locks 00:04:59.403 14:05:57 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57537 00:04:59.403 14:05:57 -- event/cpu_locks.sh@47 -- # waitforlisten 57537 00:04:59.403 14:05:57 -- common/autotest_common.sh@829 -- # '[' -z 57537 ']' 00:04:59.403 14:05:57 
-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.403 14:05:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.403 14:05:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.403 14:05:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.403 14:05:57 -- common/autotest_common.sh@10 -- # set +x 00:04:59.403 14:05:57 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.403 [2024-11-19 14:05:57.862506] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:59.403 [2024-11-19 14:05:57.862607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57537 ] 00:04:59.661 [2024-11-19 14:05:58.010302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.661 [2024-11-19 14:05:58.178543] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:59.661 [2024-11-19 14:05:58.178737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.033 14:05:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.033 14:05:59 -- common/autotest_common.sh@862 -- # return 0 00:05:01.033 14:05:59 -- event/cpu_locks.sh@49 -- # locks_exist 57537 00:05:01.033 14:05:59 -- event/cpu_locks.sh@22 -- # lslocks -p 57537 00:05:01.033 14:05:59 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:01.033 14:05:59 -- event/cpu_locks.sh@50 -- # killprocess 57537 00:05:01.033 14:05:59 -- common/autotest_common.sh@936 -- # '[' -z 57537 ']' 00:05:01.033 14:05:59 -- common/autotest_common.sh@940 -- # kill -0 57537 00:05:01.033 14:05:59 -- common/autotest_common.sh@941 -- # uname 00:05:01.033 14:05:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:01.033 14:05:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57537 00:05:01.033 14:05:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:01.033 14:05:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:01.033 killing process with pid 57537 00:05:01.033 14:05:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57537' 00:05:01.033 14:05:59 -- common/autotest_common.sh@955 -- # kill 57537 00:05:01.033 14:05:59 -- common/autotest_common.sh@960 -- # wait 57537 00:05:02.407 14:06:00 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57537 00:05:02.407 14:06:00 -- common/autotest_common.sh@650 -- # local es=0 00:05:02.407 14:06:00 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57537 00:05:02.407 14:06:00 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:02.407 14:06:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.407 14:06:00 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:02.407 14:06:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.407 14:06:00 -- common/autotest_common.sh@653 -- # waitforlisten 57537 00:05:02.407 14:06:00 -- common/autotest_common.sh@829 -- # '[' -z 57537 ']' 00:05:02.407 14:06:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.407 14:06:00 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.407 14:06:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.407 14:06:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.407 14:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:02.407 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57537) - No such process 00:05:02.407 ERROR: process (pid: 57537) is no longer running 00:05:02.407 14:06:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.407 14:06:00 -- common/autotest_common.sh@862 -- # return 1 00:05:02.407 14:06:00 -- common/autotest_common.sh@653 -- # es=1 00:05:02.407 14:06:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:02.407 14:06:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:02.407 14:06:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:02.407 14:06:00 -- event/cpu_locks.sh@54 -- # no_locks 00:05:02.407 14:06:00 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:02.407 14:06:00 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:02.407 14:06:00 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:02.407 00:05:02.407 real 0m2.995s 00:05:02.407 user 0m3.091s 00:05:02.407 sys 0m0.460s 00:05:02.407 14:06:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.408 14:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:02.408 ************************************ 00:05:02.408 END TEST default_locks 00:05:02.408 ************************************ 00:05:02.408 14:06:00 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:02.408 14:06:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.408 14:06:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.408 14:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:02.408 ************************************ 00:05:02.408 START TEST default_locks_via_rpc 00:05:02.408 ************************************ 00:05:02.408 14:06:00 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:02.408 14:06:00 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57603 00:05:02.408 14:06:00 -- event/cpu_locks.sh@63 -- # waitforlisten 57603 00:05:02.408 14:06:00 -- common/autotest_common.sh@829 -- # '[' -z 57603 ']' 00:05:02.408 14:06:00 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.408 14:06:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.408 14:06:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.408 14:06:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.408 14:06:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.408 14:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:02.408 [2024-11-19 14:06:00.885671] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
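The 'No such process' error above is the expected outcome: default_locks deliberately re-calls waitforlisten on the dead pid under the NOT wrapper, whose control flow the trace exposes. A simplified reconstruction (the real helper also validates its argument and can match expected error output):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # killed by a signal: propagate, do not invert
        (( !es == 0 ))                   # succeed only if the wrapped command failed
    }
    # usage, as in the trace: NOT waitforlisten "$stale_pid"   # asserts it stays down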
00:05:02.408 [2024-11-19 14:06:00.885759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57603 ] 00:05:02.668 [2024-11-19 14:06:01.029272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.928 [2024-11-19 14:06:01.228759] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:02.928 [2024-11-19 14:06:01.228988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.941 14:06:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.941 14:06:02 -- common/autotest_common.sh@862 -- # return 0 00:05:03.941 14:06:02 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:03.941 14:06:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.941 14:06:02 -- common/autotest_common.sh@10 -- # set +x 00:05:03.941 14:06:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.941 14:06:02 -- event/cpu_locks.sh@67 -- # no_locks 00:05:03.941 14:06:02 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:03.941 14:06:02 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:03.941 14:06:02 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:03.941 14:06:02 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:03.941 14:06:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.941 14:06:02 -- common/autotest_common.sh@10 -- # set +x 00:05:03.941 14:06:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.941 14:06:02 -- event/cpu_locks.sh@71 -- # locks_exist 57603 00:05:03.941 14:06:02 -- event/cpu_locks.sh@22 -- # lslocks -p 57603 00:05:03.941 14:06:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:04.202 14:06:02 -- event/cpu_locks.sh@73 -- # killprocess 57603 00:05:04.202 14:06:02 -- common/autotest_common.sh@936 -- # '[' -z 57603 ']' 00:05:04.202 14:06:02 -- common/autotest_common.sh@940 -- # kill -0 57603 00:05:04.202 14:06:02 -- common/autotest_common.sh@941 -- # uname 00:05:04.202 14:06:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:04.202 14:06:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57603 00:05:04.202 14:06:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:04.202 killing process with pid 57603 00:05:04.202 14:06:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:04.202 14:06:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57603' 00:05:04.202 14:06:02 -- common/autotest_common.sh@955 -- # kill 57603 00:05:04.202 14:06:02 -- common/autotest_common.sh@960 -- # wait 57603 00:05:05.581 00:05:05.581 real 0m3.095s 00:05:05.581 user 0m3.193s 00:05:05.581 sys 0m0.475s 00:05:05.581 14:06:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.581 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:05:05.581 ************************************ 00:05:05.581 END TEST default_locks_via_rpc 00:05:05.581 ************************************ 00:05:05.581 14:06:03 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:05.581 14:06:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.581 14:06:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.581 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:05:05.581 
************************************ 00:05:05.581 START TEST non_locking_app_on_locked_coremask 00:05:05.581 ************************************ 00:05:05.581 14:06:03 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:05.581 14:06:03 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57674 00:05:05.581 14:06:03 -- event/cpu_locks.sh@81 -- # waitforlisten 57674 /var/tmp/spdk.sock 00:05:05.581 14:06:03 -- common/autotest_common.sh@829 -- # '[' -z 57674 ']' 00:05:05.581 14:06:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.581 14:06:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.581 14:06:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.581 14:06:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.581 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:05:05.581 14:06:03 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.581 [2024-11-19 14:06:04.031727] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:05.581 [2024-11-19 14:06:04.031836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57674 ] 00:05:05.841 [2024-11-19 14:06:04.176919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.841 [2024-11-19 14:06:04.341044] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:05.841 [2024-11-19 14:06:04.341223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.217 14:06:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.217 14:06:05 -- common/autotest_common.sh@862 -- # return 0 00:05:07.217 14:06:05 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57697 00:05:07.217 14:06:05 -- event/cpu_locks.sh@85 -- # waitforlisten 57697 /var/tmp/spdk2.sock 00:05:07.217 14:06:05 -- common/autotest_common.sh@829 -- # '[' -z 57697 ']' 00:05:07.217 14:06:05 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:07.217 14:06:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:07.217 14:06:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.217 14:06:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.217 14:06:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.217 14:06:05 -- common/autotest_common.sh@10 -- # set +x 00:05:07.217 [2024-11-19 14:06:05.565424] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:07.217 [2024-11-19 14:06:05.565541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57697 ] 00:05:07.217 [2024-11-19 14:06:05.714308] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
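The "CPU core locks deactivated" notice just above is the point of this test: the second target is launched with --disable-cpumask-locks, so it skips the per-core flock and can share core 0 with the first target. Stripped of the harness, the two launches from the xtrace reduce to (binary path shortened from the full /home/vagrant/spdk_repo path shown above):

    spdk_tgt -m 0x1 &                          # first target, takes the core-0 lock
    spdk_tgt -m 0x1 --disable-cpumask-locks \
             -r /var/tmp/spdk2.sock &          # second target, no lock, own RPC socket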
00:05:07.217 [2024-11-19 14:06:05.714353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.784 [2024-11-19 14:06:06.067129] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:07.784 [2024-11-19 14:06:06.067310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.720 14:06:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.720 14:06:07 -- common/autotest_common.sh@862 -- # return 0 00:05:08.720 14:06:07 -- event/cpu_locks.sh@87 -- # locks_exist 57674 00:05:08.720 14:06:07 -- event/cpu_locks.sh@22 -- # lslocks -p 57674 00:05:08.720 14:06:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.978 14:06:07 -- event/cpu_locks.sh@89 -- # killprocess 57674 00:05:08.978 14:06:07 -- common/autotest_common.sh@936 -- # '[' -z 57674 ']' 00:05:08.978 14:06:07 -- common/autotest_common.sh@940 -- # kill -0 57674 00:05:08.978 14:06:07 -- common/autotest_common.sh@941 -- # uname 00:05:08.978 14:06:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:08.978 14:06:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57674 00:05:08.978 14:06:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:08.978 killing process with pid 57674 00:05:08.978 14:06:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:08.978 14:06:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57674' 00:05:08.978 14:06:07 -- common/autotest_common.sh@955 -- # kill 57674 00:05:08.978 14:06:07 -- common/autotest_common.sh@960 -- # wait 57674 00:05:11.507 14:06:09 -- event/cpu_locks.sh@90 -- # killprocess 57697 00:05:11.507 14:06:09 -- common/autotest_common.sh@936 -- # '[' -z 57697 ']' 00:05:11.507 14:06:09 -- common/autotest_common.sh@940 -- # kill -0 57697 00:05:11.507 14:06:09 -- common/autotest_common.sh@941 -- # uname 00:05:11.507 14:06:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:11.507 14:06:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57697 00:05:11.507 killing process with pid 57697 00:05:11.507 14:06:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:11.507 14:06:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:11.507 14:06:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57697' 00:05:11.507 14:06:09 -- common/autotest_common.sh@955 -- # kill 57697 00:05:11.507 14:06:09 -- common/autotest_common.sh@960 -- # wait 57697 00:05:12.880 00:05:12.880 real 0m7.183s 00:05:12.880 user 0m7.587s 00:05:12.880 sys 0m0.923s 00:05:12.880 14:06:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:12.880 14:06:11 -- common/autotest_common.sh@10 -- # set +x 00:05:12.880 ************************************ 00:05:12.880 END TEST non_locking_app_on_locked_coremask 00:05:12.880 ************************************ 00:05:12.880 14:06:11 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:12.880 14:06:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.880 14:06:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.880 14:06:11 -- common/autotest_common.sh@10 -- # set +x 00:05:12.880 ************************************ 00:05:12.880 START TEST locking_app_on_unlocked_coremask 00:05:12.880 ************************************ 00:05:12.880 14:06:11 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:12.880 14:06:11 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57796 00:05:12.880 14:06:11 -- event/cpu_locks.sh@99 -- # waitforlisten 57796 /var/tmp/spdk.sock 00:05:12.880 14:06:11 -- common/autotest_common.sh@829 -- # '[' -z 57796 ']' 00:05:12.880 14:06:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.880 14:06:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.880 14:06:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.880 14:06:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.880 14:06:11 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:12.880 14:06:11 -- common/autotest_common.sh@10 -- # set +x 00:05:12.880 [2024-11-19 14:06:11.259972] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:12.880 [2024-11-19 14:06:11.260082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57796 ] 00:05:12.880 [2024-11-19 14:06:11.408186] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:12.880 [2024-11-19 14:06:11.408231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.146 [2024-11-19 14:06:11.575859] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:13.146 [2024-11-19 14:06:11.576070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.766 14:06:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.766 14:06:12 -- common/autotest_common.sh@862 -- # return 0 00:05:13.766 14:06:12 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57806 00:05:13.766 14:06:12 -- event/cpu_locks.sh@103 -- # waitforlisten 57806 /var/tmp/spdk2.sock 00:05:13.766 14:06:12 -- common/autotest_common.sh@829 -- # '[' -z 57806 ']' 00:05:13.766 14:06:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.766 14:06:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.766 14:06:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.766 14:06:12 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:13.766 14:06:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.766 14:06:12 -- common/autotest_common.sh@10 -- # set +x 00:05:13.766 [2024-11-19 14:06:12.141926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:13.766 [2024-11-19 14:06:12.142039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57806 ] 00:05:13.766 [2024-11-19 14:06:12.292140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.332 [2024-11-19 14:06:12.621535] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:14.332 [2024-11-19 14:06:12.621712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.267 14:06:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.267 14:06:13 -- common/autotest_common.sh@862 -- # return 0 00:05:15.267 14:06:13 -- event/cpu_locks.sh@105 -- # locks_exist 57806 00:05:15.267 14:06:13 -- event/cpu_locks.sh@22 -- # lslocks -p 57806 00:05:15.267 14:06:13 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:15.524 14:06:13 -- event/cpu_locks.sh@107 -- # killprocess 57796 00:05:15.524 14:06:13 -- common/autotest_common.sh@936 -- # '[' -z 57796 ']' 00:05:15.524 14:06:13 -- common/autotest_common.sh@940 -- # kill -0 57796 00:05:15.524 14:06:13 -- common/autotest_common.sh@941 -- # uname 00:05:15.524 14:06:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:15.524 14:06:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57796 00:05:15.524 14:06:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:15.524 14:06:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:15.524 killing process with pid 57796 00:05:15.524 14:06:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57796' 00:05:15.524 14:06:13 -- common/autotest_common.sh@955 -- # kill 57796 00:05:15.524 14:06:13 -- common/autotest_common.sh@960 -- # wait 57796 00:05:18.056 14:06:16 -- event/cpu_locks.sh@108 -- # killprocess 57806 00:05:18.056 14:06:16 -- common/autotest_common.sh@936 -- # '[' -z 57806 ']' 00:05:18.056 14:06:16 -- common/autotest_common.sh@940 -- # kill -0 57806 00:05:18.056 14:06:16 -- common/autotest_common.sh@941 -- # uname 00:05:18.056 14:06:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:18.056 14:06:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57806 00:05:18.056 14:06:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:18.056 14:06:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:18.056 killing process with pid 57806 00:05:18.056 14:06:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57806' 00:05:18.056 14:06:16 -- common/autotest_common.sh@955 -- # kill 57806 00:05:18.056 14:06:16 -- common/autotest_common.sh@960 -- # wait 57806 00:05:19.431 00:05:19.431 real 0m6.557s 00:05:19.431 user 0m6.850s 00:05:19.431 sys 0m0.910s 00:05:19.431 14:06:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.431 14:06:17 -- common/autotest_common.sh@10 -- # set +x 00:05:19.431 ************************************ 00:05:19.431 END TEST locking_app_on_unlocked_coremask 00:05:19.431 ************************************ 00:05:19.431 14:06:17 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:19.431 14:06:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.431 14:06:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.431 14:06:17 -- common/autotest_common.sh@10 -- # set +x 
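The locking_app_on_locked_coremask test that starts below leans on the NOT wrapper already seen in the default_locks run: it executes a command that is expected to fail and inverts the exit status. A rough reconstruction from the autotest_common.sh xtrace (the es bookkeeping is simplified; treat this as a sketch, not the verbatim helper):

    NOT() {
        local es=0
        "$@" || es=$?            # run the wrapped command, keep its status
        (( es > 128 )) && es=1   # the trace normalizes signal deaths via (( es > 128 ))
        (( es != 0 ))            # succeed only when the command failed
    }

With that, NOT waitforlisten 57921 /var/tmp/spdk2.sock below passes precisely because the second target cannot claim the already-locked core 0.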
00:05:19.431 ************************************ 00:05:19.431 START TEST locking_app_on_locked_coremask 00:05:19.431 ************************************ 00:05:19.431 14:06:17 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:19.431 14:06:17 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57905 00:05:19.431 14:06:17 -- event/cpu_locks.sh@116 -- # waitforlisten 57905 /var/tmp/spdk.sock 00:05:19.431 14:06:17 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.431 14:06:17 -- common/autotest_common.sh@829 -- # '[' -z 57905 ']' 00:05:19.431 14:06:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.431 14:06:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.431 14:06:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.431 14:06:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.431 14:06:17 -- common/autotest_common.sh@10 -- # set +x 00:05:19.431 [2024-11-19 14:06:17.859495] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:19.431 [2024-11-19 14:06:17.859604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57905 ] 00:05:19.690 [2024-11-19 14:06:18.006229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.690 [2024-11-19 14:06:18.170565] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:19.690 [2024-11-19 14:06:18.170760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.256 14:06:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.256 14:06:18 -- common/autotest_common.sh@862 -- # return 0 00:05:20.256 14:06:18 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57921 00:05:20.256 14:06:18 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57921 /var/tmp/spdk2.sock 00:05:20.256 14:06:18 -- common/autotest_common.sh@650 -- # local es=0 00:05:20.256 14:06:18 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57921 /var/tmp/spdk2.sock 00:05:20.256 14:06:18 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:20.256 14:06:18 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:20.256 14:06:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:20.256 14:06:18 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:20.256 14:06:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:20.256 14:06:18 -- common/autotest_common.sh@653 -- # waitforlisten 57921 /var/tmp/spdk2.sock 00:05:20.256 14:06:18 -- common/autotest_common.sh@829 -- # '[' -z 57921 ']' 00:05:20.256 14:06:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.256 14:06:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.256 14:06:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
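Every "Waiting for process..." line in this log comes from waitforlisten; its xtrace shows rpc_addr defaulting to the spdk.sock path and max_retries=100. A plausible reduction of the loop (the polling and readiness checks here are assumptions; only the variables and the (( i == 0 )) exit test are visible in the trace):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = max_retries; i > 0; i-- )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died -> fail
            [[ -S $rpc_addr ]] && return 0           # socket exists -> ready
            sleep 0.1
        done
        return 1
    }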
00:05:20.256 14:06:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.257 14:06:18 -- common/autotest_common.sh@10 -- # set +x 00:05:20.257 [2024-11-19 14:06:18.737704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:20.257 [2024-11-19 14:06:18.737818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57921 ] 00:05:20.515 [2024-11-19 14:06:18.884388] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57905 has claimed it. 00:05:20.515 [2024-11-19 14:06:18.884433] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:21.091 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57921) - No such process 00:05:21.091 ERROR: process (pid: 57921) is no longer running 00:05:21.091 14:06:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.091 14:06:19 -- common/autotest_common.sh@862 -- # return 1 00:05:21.091 14:06:19 -- common/autotest_common.sh@653 -- # es=1 00:05:21.091 14:06:19 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:21.091 14:06:19 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:21.091 14:06:19 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:21.091 14:06:19 -- event/cpu_locks.sh@122 -- # locks_exist 57905 00:05:21.091 14:06:19 -- event/cpu_locks.sh@22 -- # lslocks -p 57905 00:05:21.091 14:06:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.091 14:06:19 -- event/cpu_locks.sh@124 -- # killprocess 57905 00:05:21.091 14:06:19 -- common/autotest_common.sh@936 -- # '[' -z 57905 ']' 00:05:21.091 14:06:19 -- common/autotest_common.sh@940 -- # kill -0 57905 00:05:21.091 14:06:19 -- common/autotest_common.sh@941 -- # uname 00:05:21.091 14:06:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:21.091 14:06:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57905 00:05:21.091 14:06:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:21.091 14:06:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:21.091 killing process with pid 57905 00:05:21.091 14:06:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57905' 00:05:21.091 14:06:19 -- common/autotest_common.sh@955 -- # kill 57905 00:05:21.091 14:06:19 -- common/autotest_common.sh@960 -- # wait 57905 00:05:22.467 00:05:22.467 real 0m3.067s 00:05:22.467 user 0m3.201s 00:05:22.467 sys 0m0.586s 00:05:22.467 14:06:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.467 14:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:22.467 ************************************ 00:05:22.467 END TEST locking_app_on_locked_coremask 00:05:22.467 ************************************ 00:05:22.467 14:06:20 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:22.467 14:06:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.467 14:06:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.467 14:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:22.467 ************************************ 00:05:22.467 START TEST locking_overlapped_coremask 00:05:22.467 ************************************ 00:05:22.467 14:06:20 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:22.467 14:06:20 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57974 00:05:22.467 14:06:20 -- event/cpu_locks.sh@133 -- # waitforlisten 57974 /var/tmp/spdk.sock 00:05:22.467 14:06:20 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:22.467 14:06:20 -- common/autotest_common.sh@829 -- # '[' -z 57974 ']' 00:05:22.467 14:06:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.467 14:06:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.467 14:06:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.467 14:06:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.467 14:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:22.467 [2024-11-19 14:06:20.973605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:22.467 [2024-11-19 14:06:20.974002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57974 ] 00:05:22.725 [2024-11-19 14:06:21.122201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:22.983 [2024-11-19 14:06:21.298677] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.983 [2024-11-19 14:06:21.299054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.983 [2024-11-19 14:06:21.299147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.983 [2024-11-19 14:06:21.299165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.917 14:06:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.917 14:06:22 -- common/autotest_common.sh@862 -- # return 0 00:05:23.917 14:06:22 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58005 00:05:23.917 14:06:22 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58005 /var/tmp/spdk2.sock 00:05:23.917 14:06:22 -- common/autotest_common.sh@650 -- # local es=0 00:05:23.917 14:06:22 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58005 /var/tmp/spdk2.sock 00:05:23.917 14:06:22 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:23.917 14:06:22 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:23.917 14:06:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:23.917 14:06:22 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:23.917 14:06:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:23.917 14:06:22 -- common/autotest_common.sh@653 -- # waitforlisten 58005 /var/tmp/spdk2.sock 00:05:23.917 14:06:22 -- common/autotest_common.sh@829 -- # '[' -z 58005 ']' 00:05:23.917 14:06:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.917 14:06:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.917 14:06:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
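The failure that follows is pure mask arithmetic: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so the two targets contend for core 2. A throwaway check (not part of the test itself):

    printf 'shared cores: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2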
00:05:23.917 14:06:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.917 14:06:22 -- common/autotest_common.sh@10 -- # set +x 00:05:24.176 [2024-11-19 14:06:22.532157] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:24.176 [2024-11-19 14:06:22.532269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58005 ] 00:05:24.176 [2024-11-19 14:06:22.679003] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57974 has claimed it. 00:05:24.176 [2024-11-19 14:06:22.679047] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:24.741 ERROR: process (pid: 58005) is no longer running 00:05:24.741 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (58005) - No such process 00:05:24.741 14:06:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.741 14:06:23 -- common/autotest_common.sh@862 -- # return 1 00:05:24.741 14:06:23 -- common/autotest_common.sh@653 -- # es=1 00:05:24.741 14:06:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:24.741 14:06:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:24.741 14:06:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:24.741 14:06:23 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:24.741 14:06:23 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:24.742 14:06:23 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:24.742 14:06:23 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:24.742 14:06:23 -- event/cpu_locks.sh@141 -- # killprocess 57974 00:05:24.742 14:06:23 -- common/autotest_common.sh@936 -- # '[' -z 57974 ']' 00:05:24.742 14:06:23 -- common/autotest_common.sh@940 -- # kill -0 57974 00:05:24.742 14:06:23 -- common/autotest_common.sh@941 -- # uname 00:05:24.742 14:06:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.742 14:06:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57974 00:05:24.742 14:06:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:24.742 14:06:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:24.742 killing process with pid 57974 00:05:24.742 14:06:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57974' 00:05:24.742 14:06:23 -- common/autotest_common.sh@955 -- # kill 57974 00:05:24.742 14:06:23 -- common/autotest_common.sh@960 -- # wait 57974 00:05:26.115 00:05:26.115 real 0m3.540s 00:05:26.115 user 0m9.526s 00:05:26.115 sys 0m0.505s 00:05:26.115 ************************************ 00:05:26.115 END TEST locking_overlapped_coremask 00:05:26.115 ************************************ 00:05:26.115 14:06:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.115 14:06:24 -- common/autotest_common.sh@10 -- # set +x 00:05:26.115 14:06:24 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:26.115 14:06:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.115 14:06:24 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.115 14:06:24 -- common/autotest_common.sh@10 -- # set +x 00:05:26.116 ************************************ 00:05:26.116 START TEST locking_overlapped_coremask_via_rpc 00:05:26.116 ************************************ 00:05:26.116 14:06:24 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:26.116 14:06:24 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58058 00:05:26.116 14:06:24 -- event/cpu_locks.sh@149 -- # waitforlisten 58058 /var/tmp/spdk.sock 00:05:26.116 14:06:24 -- common/autotest_common.sh@829 -- # '[' -z 58058 ']' 00:05:26.116 14:06:24 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:26.116 14:06:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.116 14:06:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.116 14:06:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.116 14:06:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.116 14:06:24 -- common/autotest_common.sh@10 -- # set +x 00:05:26.116 [2024-11-19 14:06:24.546481] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:26.116 [2024-11-19 14:06:24.546577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58058 ] 00:05:26.373 [2024-11-19 14:06:24.689115] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:26.373 [2024-11-19 14:06:24.689167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.373 [2024-11-19 14:06:24.854707] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:26.373 [2024-11-19 14:06:24.855032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.373 [2024-11-19 14:06:24.855306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.373 [2024-11-19 14:06:24.855251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.939 14:06:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.939 14:06:25 -- common/autotest_common.sh@862 -- # return 0 00:05:26.939 14:06:25 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58076 00:05:26.939 14:06:25 -- event/cpu_locks.sh@153 -- # waitforlisten 58076 /var/tmp/spdk2.sock 00:05:26.939 14:06:25 -- common/autotest_common.sh@829 -- # '[' -z 58076 ']' 00:05:26.939 14:06:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.939 14:06:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.939 14:06:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:26.939 14:06:25 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:26.939 14:06:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.939 14:06:25 -- common/autotest_common.sh@10 -- # set +x 00:05:26.939 [2024-11-19 14:06:25.441846] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:26.939 [2024-11-19 14:06:25.441973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58076 ] 00:05:27.196 [2024-11-19 14:06:25.589012] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:27.196 [2024-11-19 14:06:25.589045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:27.454 [2024-11-19 14:06:25.894563] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.454 [2024-11-19 14:06:25.895428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:27.454 [2024-11-19 14:06:25.901967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.454 [2024-11-19 14:06:25.901995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:28.388 14:06:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.388 14:06:26 -- common/autotest_common.sh@862 -- # return 0 00:05:28.388 14:06:26 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:28.388 14:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.388 14:06:26 -- common/autotest_common.sh@10 -- # set +x 00:05:28.388 14:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.388 14:06:26 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:28.388 14:06:26 -- common/autotest_common.sh@650 -- # local es=0 00:05:28.388 14:06:26 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:28.388 14:06:26 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:28.388 14:06:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.388 14:06:26 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:28.388 14:06:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.388 14:06:26 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:28.388 14:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.388 14:06:26 -- common/autotest_common.sh@10 -- # set +x 00:05:28.646 [2024-11-19 14:06:26.955022] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58058 has claimed it. 00:05:28.646 request: 00:05:28.646 { 00:05:28.646 "method": "framework_enable_cpumask_locks", 00:05:28.646 "req_id": 1 00:05:28.646 } 00:05:28.646 Got JSON-RPC error response 00:05:28.646 response: 00:05:28.646 { 00:05:28.646 "code": -32603, 00:05:28.646 "message": "Failed to claim CPU core: 2" 00:05:28.646 } 00:05:28.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
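The JSON-RPC exchange above is the key assertion of this test: both targets start with --disable-cpumask-locks, the first framework_enable_cpumask_locks call claims cores 0-2, and the same RPC against the second target's socket then fails on the shared core 2. Reproduced by hand it would look roughly like this (scripts/rpc.py invocation assumed; the error body is verbatim from the log):

    scripts/rpc.py framework_enable_cpumask_locks         # first target: claims 0x7
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> {"code": -32603, "message": "Failed to claim CPU core: 2"}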
00:05:28.646 14:06:26 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:28.646 14:06:26 -- common/autotest_common.sh@653 -- # es=1 00:05:28.646 14:06:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:28.646 14:06:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:28.646 14:06:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:28.646 14:06:26 -- event/cpu_locks.sh@158 -- # waitforlisten 58058 /var/tmp/spdk.sock 00:05:28.646 14:06:26 -- common/autotest_common.sh@829 -- # '[' -z 58058 ']' 00:05:28.646 14:06:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.646 14:06:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.646 14:06:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.646 14:06:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.646 14:06:26 -- common/autotest_common.sh@10 -- # set +x 00:05:28.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.646 14:06:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.646 14:06:27 -- common/autotest_common.sh@862 -- # return 0 00:05:28.646 14:06:27 -- event/cpu_locks.sh@159 -- # waitforlisten 58076 /var/tmp/spdk2.sock 00:05:28.646 14:06:27 -- common/autotest_common.sh@829 -- # '[' -z 58076 ']' 00:05:28.646 14:06:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.646 14:06:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.646 14:06:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.646 14:06:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.646 14:06:27 -- common/autotest_common.sh@10 -- # set +x 00:05:28.904 ************************************ 00:05:28.904 END TEST locking_overlapped_coremask_via_rpc 00:05:28.904 ************************************ 00:05:28.904 14:06:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.904 14:06:27 -- common/autotest_common.sh@862 -- # return 0 00:05:28.904 14:06:27 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:28.904 14:06:27 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:28.904 14:06:27 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:28.904 14:06:27 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:28.904 00:05:28.904 real 0m2.871s 00:05:28.904 user 0m1.135s 00:05:28.904 sys 0m0.155s 00:05:28.904 14:06:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.904 14:06:27 -- common/autotest_common.sh@10 -- # set +x 00:05:28.904 14:06:27 -- event/cpu_locks.sh@174 -- # cleanup 00:05:28.904 14:06:27 -- event/cpu_locks.sh@15 -- # [[ -z 58058 ]] 00:05:28.904 14:06:27 -- event/cpu_locks.sh@15 -- # killprocess 58058 00:05:28.904 14:06:27 -- common/autotest_common.sh@936 -- # '[' -z 58058 ']' 00:05:28.904 14:06:27 -- common/autotest_common.sh@940 -- # kill -0 58058 00:05:28.904 14:06:27 -- common/autotest_common.sh@941 -- # uname 00:05:28.904 14:06:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:28.904 14:06:27 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 58058 00:05:28.904 killing process with pid 58058 00:05:28.904 14:06:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:28.904 14:06:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:28.904 14:06:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58058' 00:05:28.904 14:06:27 -- common/autotest_common.sh@955 -- # kill 58058 00:05:28.904 14:06:27 -- common/autotest_common.sh@960 -- # wait 58058 00:05:30.278 14:06:28 -- event/cpu_locks.sh@16 -- # [[ -z 58076 ]] 00:05:30.278 14:06:28 -- event/cpu_locks.sh@16 -- # killprocess 58076 00:05:30.278 14:06:28 -- common/autotest_common.sh@936 -- # '[' -z 58076 ']' 00:05:30.278 14:06:28 -- common/autotest_common.sh@940 -- # kill -0 58076 00:05:30.278 14:06:28 -- common/autotest_common.sh@941 -- # uname 00:05:30.278 14:06:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:30.278 14:06:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58076 00:05:30.278 killing process with pid 58076 00:05:30.278 14:06:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:30.278 14:06:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:30.278 14:06:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58076' 00:05:30.278 14:06:28 -- common/autotest_common.sh@955 -- # kill 58076 00:05:30.278 14:06:28 -- common/autotest_common.sh@960 -- # wait 58076 00:05:31.655 14:06:29 -- event/cpu_locks.sh@18 -- # rm -f 00:05:31.655 Process with pid 58058 is not found 00:05:31.655 14:06:29 -- event/cpu_locks.sh@1 -- # cleanup 00:05:31.655 14:06:29 -- event/cpu_locks.sh@15 -- # [[ -z 58058 ]] 00:05:31.655 14:06:29 -- event/cpu_locks.sh@15 -- # killprocess 58058 00:05:31.655 14:06:29 -- common/autotest_common.sh@936 -- # '[' -z 58058 ']' 00:05:31.655 14:06:29 -- common/autotest_common.sh@940 -- # kill -0 58058 00:05:31.655 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58058) - No such process 00:05:31.655 14:06:29 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58058 is not found' 00:05:31.655 14:06:29 -- event/cpu_locks.sh@16 -- # [[ -z 58076 ]] 00:05:31.655 14:06:29 -- event/cpu_locks.sh@16 -- # killprocess 58076 00:05:31.655 14:06:29 -- common/autotest_common.sh@936 -- # '[' -z 58076 ']' 00:05:31.655 Process with pid 58076 is not found 00:05:31.655 14:06:29 -- common/autotest_common.sh@940 -- # kill -0 58076 00:05:31.655 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58076) - No such process 00:05:31.655 14:06:29 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58076 is not found' 00:05:31.655 14:06:29 -- event/cpu_locks.sh@18 -- # rm -f 00:05:31.655 ************************************ 00:05:31.655 END TEST cpu_locks 00:05:31.655 ************************************ 00:05:31.655 00:05:31.655 real 0m32.239s 00:05:31.655 user 0m54.413s 00:05:31.655 sys 0m4.852s 00:05:31.655 14:06:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.655 14:06:29 -- common/autotest_common.sh@10 -- # set +x 00:05:31.655 ************************************ 00:05:31.655 END TEST event 00:05:31.655 ************************************ 00:05:31.655 00:05:31.655 real 0m58.075s 00:05:31.655 user 1m44.079s 00:05:31.655 sys 0m7.646s 00:05:31.655 14:06:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.655 14:06:29 -- common/autotest_common.sh@10 -- # set +x 00:05:31.655 14:06:29 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:31.655 14:06:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.655 14:06:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.655 14:06:29 -- common/autotest_common.sh@10 -- # set +x 00:05:31.655 ************************************ 00:05:31.655 START TEST thread 00:05:31.655 ************************************ 00:05:31.655 14:06:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:31.655 * Looking for test storage... 00:05:31.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:31.655 14:06:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:31.655 14:06:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:31.655 14:06:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:31.655 14:06:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:31.655 14:06:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:31.655 14:06:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:31.655 14:06:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:31.655 14:06:30 -- scripts/common.sh@335 -- # IFS=.-: 00:05:31.655 14:06:30 -- scripts/common.sh@335 -- # read -ra ver1 00:05:31.655 14:06:30 -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.655 14:06:30 -- scripts/common.sh@336 -- # read -ra ver2 00:05:31.655 14:06:30 -- scripts/common.sh@337 -- # local 'op=<' 00:05:31.656 14:06:30 -- scripts/common.sh@339 -- # ver1_l=2 00:05:31.656 14:06:30 -- scripts/common.sh@340 -- # ver2_l=1 00:05:31.656 14:06:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:31.656 14:06:30 -- scripts/common.sh@343 -- # case "$op" in 00:05:31.656 14:06:30 -- scripts/common.sh@344 -- # : 1 00:05:31.656 14:06:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:31.656 14:06:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.656 14:06:30 -- scripts/common.sh@364 -- # decimal 1 00:05:31.656 14:06:30 -- scripts/common.sh@352 -- # local d=1 00:05:31.656 14:06:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.656 14:06:30 -- scripts/common.sh@354 -- # echo 1 00:05:31.656 14:06:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:31.656 14:06:30 -- scripts/common.sh@365 -- # decimal 2 00:05:31.656 14:06:30 -- scripts/common.sh@352 -- # local d=2 00:05:31.656 14:06:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.656 14:06:30 -- scripts/common.sh@354 -- # echo 2 00:05:31.656 14:06:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:31.656 14:06:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:31.656 14:06:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:31.656 14:06:30 -- scripts/common.sh@367 -- # return 0 00:05:31.656 14:06:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.656 14:06:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:31.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.656 --rc genhtml_branch_coverage=1 00:05:31.656 --rc genhtml_function_coverage=1 00:05:31.656 --rc genhtml_legend=1 00:05:31.656 --rc geninfo_all_blocks=1 00:05:31.656 --rc geninfo_unexecuted_blocks=1 00:05:31.656 00:05:31.656 ' 00:05:31.656 14:06:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:31.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.656 --rc genhtml_branch_coverage=1 00:05:31.656 --rc genhtml_function_coverage=1 00:05:31.656 --rc genhtml_legend=1 00:05:31.656 --rc geninfo_all_blocks=1 00:05:31.656 --rc geninfo_unexecuted_blocks=1 00:05:31.656 00:05:31.656 ' 00:05:31.656 14:06:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:31.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.656 --rc genhtml_branch_coverage=1 00:05:31.656 --rc genhtml_function_coverage=1 00:05:31.656 --rc genhtml_legend=1 00:05:31.656 --rc geninfo_all_blocks=1 00:05:31.656 --rc geninfo_unexecuted_blocks=1 00:05:31.656 00:05:31.656 ' 00:05:31.656 14:06:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:31.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.656 --rc genhtml_branch_coverage=1 00:05:31.656 --rc genhtml_function_coverage=1 00:05:31.656 --rc genhtml_legend=1 00:05:31.656 --rc geninfo_all_blocks=1 00:05:31.656 --rc geninfo_unexecuted_blocks=1 00:05:31.656 00:05:31.656 ' 00:05:31.656 14:06:30 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:31.656 14:06:30 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:31.656 14:06:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.656 14:06:30 -- common/autotest_common.sh@10 -- # set +x 00:05:31.656 ************************************ 00:05:31.656 START TEST thread_poller_perf 00:05:31.656 ************************************ 00:05:31.656 14:06:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:31.656 [2024-11-19 14:06:30.145093] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:31.656 [2024-11-19 14:06:30.145255] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58226 ] 00:05:31.913 [2024-11-19 14:06:30.289001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.913 [2024-11-19 14:06:30.461458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.913 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:33.289 [2024-11-19T14:06:31.851Z] ====================================== 00:05:33.289 [2024-11-19T14:06:31.851Z] busy:2614152634 (cyc) 00:05:33.289 [2024-11-19T14:06:31.851Z] total_run_count: 386000 00:05:33.289 [2024-11-19T14:06:31.851Z] tsc_hz: 2600000000 (cyc) 00:05:33.289 [2024-11-19T14:06:31.851Z] ====================================== 00:05:33.289 [2024-11-19T14:06:31.851Z] poller_cost: 6772 (cyc), 2604 (nsec) 00:05:33.289 ************************************ 00:05:33.289 END TEST thread_poller_perf 00:05:33.289 ************************************ 00:05:33.289 00:05:33.289 real 0m1.573s 00:05:33.289 user 0m1.384s 00:05:33.289 sys 0m0.081s 00:05:33.289 14:06:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.289 14:06:31 -- common/autotest_common.sh@10 -- # set +x 00:05:33.289 14:06:31 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:33.289 14:06:31 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:33.289 14:06:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.289 14:06:31 -- common/autotest_common.sh@10 -- # set +x 00:05:33.289 ************************************ 00:05:33.289 START TEST thread_poller_perf 00:05:33.289 ************************************ 00:05:33.289 14:06:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:33.289 [2024-11-19 14:06:31.755808] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:33.289 [2024-11-19 14:06:31.755987] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58268 ] 00:05:33.548 [2024-11-19 14:06:31.897908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.548 [2024-11-19 14:06:32.068268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.548 Running 1000 pollers for 1 seconds with 0 microseconds period. 
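For the first run above (1 microsecond period) the summary line follows directly from the three raw numbers: cost per invocation is busy cycles over run count, and the nanosecond figure rescales by the TSC rate. The same arithmetic, checked with awk:

    awk 'BEGIN {
        busy = 2614152634; runs = 386000; hz = 2600000000
        cyc  = busy / runs                  # ~6772 cycles per poller call
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc / hz * 1e9
    }'
    # -> poller_cost: 6772 (cyc), 2604 (nsec)

The 0 microsecond run whose output follows amortizes the reactor's busy cycles over far more invocations, hence its much lower per-call figure.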
00:05:35.016 [2024-11-19T14:06:33.578Z] ====================================== 00:05:35.016 [2024-11-19T14:06:33.578Z] busy:2603972704 (cyc) 00:05:35.016 [2024-11-19T14:06:33.578Z] total_run_count: 5333000 00:05:35.016 [2024-11-19T14:06:33.578Z] tsc_hz: 2600000000 (cyc) 00:05:35.016 [2024-11-19T14:06:33.578Z] ====================================== 00:05:35.016 [2024-11-19T14:06:33.578Z] poller_cost: 488 (cyc), 187 (nsec) 00:05:35.016 00:05:35.016 real 0m1.566s 00:05:35.016 user 0m1.378s 00:05:35.016 sys 0m0.081s 00:05:35.016 14:06:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.016 14:06:33 -- common/autotest_common.sh@10 -- # set +x 00:05:35.016 ************************************ 00:05:35.016 END TEST thread_poller_perf 00:05:35.016 ************************************ 00:05:35.016 14:06:33 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:35.016 00:05:35.016 real 0m3.359s 00:05:35.016 user 0m2.867s 00:05:35.016 sys 0m0.279s 00:05:35.016 14:06:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.016 14:06:33 -- common/autotest_common.sh@10 -- # set +x 00:05:35.016 ************************************ 00:05:35.016 END TEST thread 00:05:35.016 ************************************ 00:05:35.016 14:06:33 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:35.016 14:06:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.016 14:06:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.016 14:06:33 -- common/autotest_common.sh@10 -- # set +x 00:05:35.016 ************************************ 00:05:35.016 START TEST accel 00:05:35.016 ************************************ 00:05:35.016 14:06:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:35.016 * Looking for test storage... 00:05:35.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:35.016 14:06:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:35.016 14:06:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:35.016 14:06:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:35.016 14:06:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:35.016 14:06:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:35.016 14:06:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:35.016 14:06:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:35.016 14:06:33 -- scripts/common.sh@335 -- # IFS=.-: 00:05:35.016 14:06:33 -- scripts/common.sh@335 -- # read -ra ver1 00:05:35.016 14:06:33 -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.016 14:06:33 -- scripts/common.sh@336 -- # read -ra ver2 00:05:35.016 14:06:33 -- scripts/common.sh@337 -- # local 'op=<' 00:05:35.016 14:06:33 -- scripts/common.sh@339 -- # ver1_l=2 00:05:35.016 14:06:33 -- scripts/common.sh@340 -- # ver2_l=1 00:05:35.016 14:06:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:35.016 14:06:33 -- scripts/common.sh@343 -- # case "$op" in 00:05:35.016 14:06:33 -- scripts/common.sh@344 -- # : 1 00:05:35.016 14:06:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:35.016 14:06:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.016 14:06:33 -- scripts/common.sh@364 -- # decimal 1 00:05:35.016 14:06:33 -- scripts/common.sh@352 -- # local d=1 00:05:35.016 14:06:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.016 14:06:33 -- scripts/common.sh@354 -- # echo 1 00:05:35.016 14:06:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:35.016 14:06:33 -- scripts/common.sh@365 -- # decimal 2 00:05:35.016 14:06:33 -- scripts/common.sh@352 -- # local d=2 00:05:35.016 14:06:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.016 14:06:33 -- scripts/common.sh@354 -- # echo 2 00:05:35.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.016 14:06:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:35.016 14:06:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:35.016 14:06:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:35.016 14:06:33 -- scripts/common.sh@367 -- # return 0 00:05:35.016 14:06:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.016 14:06:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:35.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.016 --rc genhtml_branch_coverage=1 00:05:35.016 --rc genhtml_function_coverage=1 00:05:35.016 --rc genhtml_legend=1 00:05:35.016 --rc geninfo_all_blocks=1 00:05:35.016 --rc geninfo_unexecuted_blocks=1 00:05:35.016 00:05:35.016 ' 00:05:35.016 14:06:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:35.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.016 --rc genhtml_branch_coverage=1 00:05:35.016 --rc genhtml_function_coverage=1 00:05:35.016 --rc genhtml_legend=1 00:05:35.016 --rc geninfo_all_blocks=1 00:05:35.016 --rc geninfo_unexecuted_blocks=1 00:05:35.016 00:05:35.016 ' 00:05:35.016 14:06:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:35.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.016 --rc genhtml_branch_coverage=1 00:05:35.016 --rc genhtml_function_coverage=1 00:05:35.016 --rc genhtml_legend=1 00:05:35.016 --rc geninfo_all_blocks=1 00:05:35.016 --rc geninfo_unexecuted_blocks=1 00:05:35.016 00:05:35.016 ' 00:05:35.016 14:06:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:35.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.016 --rc genhtml_branch_coverage=1 00:05:35.016 --rc genhtml_function_coverage=1 00:05:35.016 --rc genhtml_legend=1 00:05:35.016 --rc geninfo_all_blocks=1 00:05:35.016 --rc geninfo_unexecuted_blocks=1 00:05:35.016 00:05:35.016 ' 00:05:35.016 14:06:33 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:35.016 14:06:33 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:35.016 14:06:33 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:35.016 14:06:33 -- accel/accel.sh@59 -- # spdk_tgt_pid=58356 00:05:35.016 14:06:33 -- accel/accel.sh@60 -- # waitforlisten 58356 00:05:35.016 14:06:33 -- common/autotest_common.sh@829 -- # '[' -z 58356 ']' 00:05:35.016 14:06:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.016 14:06:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.016 14:06:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:35.016 14:06:33 -- accel/accel.sh@58 -- # build_accel_config 00:05:35.016 14:06:33 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:35.016 14:06:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.016 14:06:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:35.016 14:06:33 -- common/autotest_common.sh@10 -- # set +x 00:05:35.016 14:06:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.016 14:06:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.016 14:06:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:35.016 14:06:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:35.016 14:06:33 -- accel/accel.sh@41 -- # local IFS=, 00:05:35.016 14:06:33 -- accel/accel.sh@42 -- # jq -r . 00:05:35.016 [2024-11-19 14:06:33.561963] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:35.017 [2024-11-19 14:06:33.562198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58356 ] 00:05:35.274 [2024-11-19 14:06:33.709826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.533 [2024-11-19 14:06:33.895575] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:35.533 [2024-11-19 14:06:33.895931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.467 14:06:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.467 14:06:35 -- common/autotest_common.sh@862 -- # return 0 00:05:36.467 14:06:35 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:36.467 14:06:35 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:36.467 14:06:35 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:36.467 14:06:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.467 14:06:35 -- common/autotest_common.sh@10 -- # set +x 00:05:36.724 14:06:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.724 14:06:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.724 14:06:35 -- accel/accel.sh@64 -- # IFS== 00:05:36.724 14:06:35 -- accel/accel.sh@64 -- # read -r opc module 00:05:36.724 14:06:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:36.724 14:06:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.724 14:06:35 -- accel/accel.sh@64 -- # IFS== 00:05:36.724 14:06:35 -- accel/accel.sh@64 -- # read -r opc module 00:05:36.724 14:06:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:36.724 14:06:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.724 14:06:35 -- accel/accel.sh@64 -- # IFS== 00:05:36.724 14:06:35 -- accel/accel.sh@64 -- # read -r opc module 00:05:36.724 14:06:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:36.725 14:06:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # IFS== 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # read -r opc module 00:05:36.725 14:06:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:36.725 14:06:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # IFS== 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # read -r opc module 00:05:36.725 14:06:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:36.725 14:06:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # IFS== 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # read -r opc module 00:05:36.725 14:06:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:36.725 14:06:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # IFS== 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # read -r opc module 00:05:36.725 14:06:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:36.725 14:06:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # IFS== 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # read -r opc module 00:05:36.725 14:06:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:36.725 14:06:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # IFS== 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # read -r opc module 00:05:36.725 14:06:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:36.725 14:06:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # IFS== 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # read -r opc module 00:05:36.725 14:06:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:36.725 14:06:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # IFS== 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # read -r opc module 00:05:36.725 14:06:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:36.725 14:06:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # 
IFS== 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # read -r opc module 00:05:36.725 14:06:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:36.725 14:06:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # IFS== 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # read -r opc module 00:05:36.725 14:06:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:36.725 14:06:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # IFS== 00:05:36.725 14:06:35 -- accel/accel.sh@64 -- # read -r opc module 00:05:36.725 14:06:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:36.725 14:06:35 -- accel/accel.sh@67 -- # killprocess 58356 00:05:36.725 14:06:35 -- common/autotest_common.sh@936 -- # '[' -z 58356 ']' 00:05:36.725 14:06:35 -- common/autotest_common.sh@940 -- # kill -0 58356 00:05:36.725 14:06:35 -- common/autotest_common.sh@941 -- # uname 00:05:36.725 14:06:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:36.725 14:06:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58356 00:05:36.725 14:06:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:36.725 14:06:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:36.725 killing process with pid 58356 00:05:36.725 14:06:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58356' 00:05:36.725 14:06:35 -- common/autotest_common.sh@955 -- # kill 58356 00:05:36.725 14:06:35 -- common/autotest_common.sh@960 -- # wait 58356 00:05:38.099 14:06:36 -- accel/accel.sh@68 -- # trap - ERR 00:05:38.099 14:06:36 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:38.099 14:06:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:38.099 14:06:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.099 14:06:36 -- common/autotest_common.sh@10 -- # set +x 00:05:38.099 14:06:36 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:05:38.099 14:06:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:38.099 14:06:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.099 14:06:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:38.099 14:06:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.099 14:06:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.099 14:06:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:38.099 14:06:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:38.099 14:06:36 -- accel/accel.sh@41 -- # local IFS=, 00:05:38.099 14:06:36 -- accel/accel.sh@42 -- # jq -r . 
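NOTE: the get_expected_opcs sequence traced above reduces to the paraphrase below (not the verbatim test/accel/accel.sh source): a spdk_tgt is started, waitforlisten blocks on the RPC socket, and the test records which module is assigned to each accel opcode so later runs can assert an opcode really executed there. With no hardware engines configured, every opcode maps to "software", which is all the repeated expected_opcs["$opc"]=software lines say.
# Build the opcode -> module map over RPC (assumes the spdk repo root as cwd).
declare -A expected_opcs
while IFS== read -r opc module; do    # split each "opcode=module" pair on '='
    expected_opcs["$opc"]=$module
done < <(scripts/rpc.py accel_get_opc_assignments |
    jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')
Each accel_test run later re-parses accel_perf's "SPDK Configuration" echo with the same trick (IFS=:; read -r var val) to pick out accel_opc and accel_module and compare the module against this map; that is what the long val=/case "$var" in traces further down are doing.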
00:05:38.099 14:06:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.099 14:06:36 -- common/autotest_common.sh@10 -- # set +x 00:05:38.099 14:06:36 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:38.099 14:06:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:38.099 14:06:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.099 14:06:36 -- common/autotest_common.sh@10 -- # set +x 00:05:38.099 ************************************ 00:05:38.099 START TEST accel_missing_filename 00:05:38.099 ************************************ 00:05:38.099 14:06:36 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:05:38.099 14:06:36 -- common/autotest_common.sh@650 -- # local es=0 00:05:38.099 14:06:36 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:38.099 14:06:36 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:38.099 14:06:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.099 14:06:36 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:38.099 14:06:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.099 14:06:36 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:05:38.099 14:06:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:38.099 14:06:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.099 14:06:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:38.099 14:06:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.099 14:06:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.099 14:06:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:38.099 14:06:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:38.099 14:06:36 -- accel/accel.sh@41 -- # local IFS=, 00:05:38.099 14:06:36 -- accel/accel.sh@42 -- # jq -r . 00:05:38.099 [2024-11-19 14:06:36.487024] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:38.099 [2024-11-19 14:06:36.487617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58417 ] 00:05:38.099 [2024-11-19 14:06:36.634196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.357 [2024-11-19 14:06:36.818130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.615 [2024-11-19 14:06:36.940412] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:38.876 [2024-11-19 14:06:37.213059] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:38.876 A filename is required. 
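NOTE: boiled down, accel_missing_filename checks a single negative path: a compress workload needs an input file via -l, so accel_perf must refuse to start without one. A hedged sketch (NOT below is a simplified stand-in for the autotest_common.sh helper, which also folds the raw exit status down to 1, as the es=234 -> es=106 -> es=1 lines that follow show):
NOT() { ! "$@"; }    # succeed only if the wrapped command fails
# Expected to print "A filename is required." and exit non-zero:
NOT /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress &&
    echo 'compress without -l failed, as the test requires'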
00:05:39.135 14:06:37 -- common/autotest_common.sh@653 -- # es=234 00:05:39.135 14:06:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.135 14:06:37 -- common/autotest_common.sh@662 -- # es=106 00:05:39.135 14:06:37 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:39.135 14:06:37 -- common/autotest_common.sh@670 -- # es=1 00:05:39.135 14:06:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.135 00:05:39.135 real 0m0.992s 00:05:39.135 user 0m0.786s 00:05:39.135 sys 0m0.132s 00:05:39.135 14:06:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.135 14:06:37 -- common/autotest_common.sh@10 -- # set +x 00:05:39.135 ************************************ 00:05:39.135 END TEST accel_missing_filename 00:05:39.135 ************************************ 00:05:39.135 14:06:37 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:39.135 14:06:37 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:39.135 14:06:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.135 14:06:37 -- common/autotest_common.sh@10 -- # set +x 00:05:39.135 ************************************ 00:05:39.135 START TEST accel_compress_verify 00:05:39.135 ************************************ 00:05:39.135 14:06:37 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:39.135 14:06:37 -- common/autotest_common.sh@650 -- # local es=0 00:05:39.135 14:06:37 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:39.135 14:06:37 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:39.135 14:06:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.135 14:06:37 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:39.135 14:06:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.135 14:06:37 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:39.135 14:06:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:39.135 14:06:37 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.135 14:06:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:39.135 14:06:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.135 14:06:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.135 14:06:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:39.135 14:06:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:39.135 14:06:37 -- accel/accel.sh@41 -- # local IFS=, 00:05:39.135 14:06:37 -- accel/accel.sh@42 -- # jq -r . 00:05:39.135 [2024-11-19 14:06:37.516854] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:39.135 [2024-11-19 14:06:37.516951] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58448 ] 00:05:39.135 [2024-11-19 14:06:37.662257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.393 [2024-11-19 14:06:37.828726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.393 [2024-11-19 14:06:37.951058] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:39.962 [2024-11-19 14:06:38.222943] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:39.962 00:05:39.962 Compression does not support the verify option, aborting. 00:05:39.962 14:06:38 -- common/autotest_common.sh@653 -- # es=161 00:05:39.962 14:06:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.962 14:06:38 -- common/autotest_common.sh@662 -- # es=33 00:05:39.962 14:06:38 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:39.962 14:06:38 -- common/autotest_common.sh@670 -- # es=1 00:05:39.962 14:06:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.962 00:05:39.962 real 0m0.969s 00:05:39.962 user 0m0.747s 00:05:39.962 sys 0m0.145s 00:05:39.962 ************************************ 00:05:39.962 END TEST accel_compress_verify 00:05:39.962 ************************************ 00:05:39.962 14:06:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.962 14:06:38 -- common/autotest_common.sh@10 -- # set +x 00:05:39.962 14:06:38 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:39.962 14:06:38 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:39.962 14:06:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.962 14:06:38 -- common/autotest_common.sh@10 -- # set +x 00:05:39.962 ************************************ 00:05:39.962 START TEST accel_wrong_workload 00:05:39.962 ************************************ 00:05:39.962 14:06:38 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:05:39.962 14:06:38 -- common/autotest_common.sh@650 -- # local es=0 00:05:39.962 14:06:38 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:39.962 14:06:38 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:39.962 14:06:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.962 14:06:38 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:39.963 14:06:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.963 14:06:38 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:05:39.963 14:06:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:39.963 14:06:38 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.963 14:06:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:39.963 14:06:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.963 14:06:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.963 14:06:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:39.963 14:06:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:39.963 14:06:38 -- accel/accel.sh@41 -- # local IFS=, 00:05:39.963 14:06:38 -- accel/accel.sh@42 -- # jq -r . 
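NOTE: accel_compress_verify, which finished just above, is the companion negative test: this time the input file is supplied (-l /home/vagrant/spdk_repo/spdk/test/accel/bib) but -y is added, and accel_perf aborts because the compress workload offers no verify mode. A sketch of the failing invocation, reusing the NOT idea from before:
# Expected to print "Compression does not support the verify option, aborting."
NOT /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y &&
    echo 'compress with -y rejected, as the test requires'
The accel_wrong_workload run whose setup is traced above exercises the next rejection, an unrecognized -w value; its output and usage dump follow.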
00:05:40.224 Unsupported workload type: foobar 00:05:40.224 [2024-11-19 14:06:38.525138] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:40.224 accel_perf options: 00:05:40.224 [-h help message] 00:05:40.224 [-q queue depth per core] 00:05:40.224 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:40.224 [-T number of threads per core 00:05:40.224 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:40.224 [-t time in seconds] 00:05:40.224 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:40.224 [ dif_verify, , dif_generate, dif_generate_copy 00:05:40.224 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:40.224 [-l for compress/decompress workloads, name of uncompressed input file 00:05:40.224 [-S for crc32c workload, use this seed value (default 0) 00:05:40.224 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:40.224 [-f for fill workload, use this BYTE value (default 255) 00:05:40.224 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:40.224 [-y verify result if this switch is on] 00:05:40.224 [-a tasks to allocate per core (default: same value as -q)] 00:05:40.224 Can be used to spread operations across a wider range of memory. 00:05:40.224 14:06:38 -- common/autotest_common.sh@653 -- # es=1 00:05:40.224 14:06:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:40.224 14:06:38 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:40.224 14:06:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:40.224 00:05:40.224 real 0m0.057s 00:05:40.224 user 0m0.058s 00:05:40.224 sys 0m0.027s 00:05:40.224 ************************************ 00:05:40.224 END TEST accel_wrong_workload 00:05:40.224 ************************************ 00:05:40.224 14:06:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.224 14:06:38 -- common/autotest_common.sh@10 -- # set +x 00:05:40.224 14:06:38 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:40.224 14:06:38 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:40.224 14:06:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.224 14:06:38 -- common/autotest_common.sh@10 -- # set +x 00:05:40.224 ************************************ 00:05:40.224 START TEST accel_negative_buffers 00:05:40.224 ************************************ 00:05:40.224 14:06:38 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:40.224 14:06:38 -- common/autotest_common.sh@650 -- # local es=0 00:05:40.224 14:06:38 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:40.224 14:06:38 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:40.224 14:06:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.224 14:06:38 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:40.224 14:06:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.224 14:06:38 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:05:40.224 14:06:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:40.224 14:06:38 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:40.224 14:06:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.224 14:06:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.224 14:06:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.224 14:06:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.224 14:06:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.224 14:06:38 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.224 14:06:38 -- accel/accel.sh@42 -- # jq -r . 00:05:40.224 -x option must be non-negative. 00:05:40.225 [2024-11-19 14:06:38.614205] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:40.225 accel_perf options: 00:05:40.225 [-h help message] 00:05:40.225 [-q queue depth per core] 00:05:40.225 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:40.225 [-T number of threads per core 00:05:40.225 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:40.225 [-t time in seconds] 00:05:40.225 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:40.225 [ dif_verify, , dif_generate, dif_generate_copy 00:05:40.225 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:40.225 [-l for compress/decompress workloads, name of uncompressed input file 00:05:40.225 [-S for crc32c workload, use this seed value (default 0) 00:05:40.225 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:40.225 [-f for fill workload, use this BYTE value (default 255) 00:05:40.225 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:40.225 [-y verify result if this switch is on] 00:05:40.225 [-a tasks to allocate per core (default: same value as -q)] 00:05:40.225 Can be used to spread operations across a wider range of memory. 
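NOTE: the usage text above (printed twice, once for -w foobar and once for -x -1) doubles as a flag reference for the rest of this suite; note too that bandwidth in the result tables below is just transfers/s times the 4096-byte transfer size (600448/s * 4096 B ~ 2345 MiB/s), independent of vector count. Two illustrative invocations that would pass the argument checks (values arbitrary but consistent with the runs below):
# xor needs at least two source buffers, so -x -1 (accel_negative_buffers) is
# rejected before startup, as shown above:
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2
# fill with byte 128 (0x80), queue depth 64, 64 preallocated tasks, verified:
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
The -c /dev/fd/62 seen in every traced invocation is accel.sh handing the (here empty) accel JSON config to accel_perf over an inherited file descriptor rather than a temp file.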
00:05:40.225 14:06:38 -- common/autotest_common.sh@653 -- # es=1 00:05:40.225 14:06:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:40.225 14:06:38 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:40.225 14:06:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:40.225 00:05:40.225 real 0m0.043s 00:05:40.225 user 0m0.045s 00:05:40.225 sys 0m0.025s 00:05:40.225 ************************************ 00:05:40.225 END TEST accel_negative_buffers 00:05:40.225 ************************************ 00:05:40.225 14:06:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.225 14:06:38 -- common/autotest_common.sh@10 -- # set +x 00:05:40.225 14:06:38 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:40.225 14:06:38 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:40.225 14:06:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.225 14:06:38 -- common/autotest_common.sh@10 -- # set +x 00:05:40.225 ************************************ 00:05:40.225 START TEST accel_crc32c 00:05:40.225 ************************************ 00:05:40.225 14:06:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:40.225 14:06:38 -- accel/accel.sh@16 -- # local accel_opc 00:05:40.225 14:06:38 -- accel/accel.sh@17 -- # local accel_module 00:05:40.225 14:06:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:40.225 14:06:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:40.225 14:06:38 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.225 14:06:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.225 14:06:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.225 14:06:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.225 14:06:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.225 14:06:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.225 14:06:38 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.225 14:06:38 -- accel/accel.sh@42 -- # jq -r . 00:05:40.225 [2024-11-19 14:06:38.696664] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:40.225 [2024-11-19 14:06:38.696757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58526 ] 00:05:40.484 [2024-11-19 14:06:38.844961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.484 [2024-11-19 14:06:39.027931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.385 14:06:40 -- accel/accel.sh@18 -- # out=' 00:05:42.385 SPDK Configuration: 00:05:42.385 Core mask: 0x1 00:05:42.385 00:05:42.385 Accel Perf Configuration: 00:05:42.385 Workload Type: crc32c 00:05:42.385 CRC-32C seed: 32 00:05:42.385 Transfer size: 4096 bytes 00:05:42.385 Vector count 1 00:05:42.385 Module: software 00:05:42.385 Queue depth: 32 00:05:42.385 Allocate depth: 32 00:05:42.385 # threads/core: 1 00:05:42.385 Run time: 1 seconds 00:05:42.385 Verify: Yes 00:05:42.385 00:05:42.385 Running for 1 seconds... 
00:05:42.385 00:05:42.385 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:42.385 ------------------------------------------------------------------------------------ 00:05:42.385 0,0 600448/s 2345 MiB/s 0 0 00:05:42.385 ==================================================================================== 00:05:42.385 Total 600448/s 2345 MiB/s 0 0' 00:05:42.385 14:06:40 -- accel/accel.sh@20 -- # IFS=: 00:05:42.385 14:06:40 -- accel/accel.sh@20 -- # read -r var val 00:05:42.385 14:06:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:42.385 14:06:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:42.385 14:06:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.385 14:06:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.385 14:06:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.385 14:06:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.385 14:06:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.385 14:06:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.385 14:06:40 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.385 14:06:40 -- accel/accel.sh@42 -- # jq -r . 00:05:42.385 [2024-11-19 14:06:40.689208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:42.385 [2024-11-19 14:06:40.689305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58548 ] 00:05:42.385 [2024-11-19 14:06:40.834695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.643 [2024-11-19 14:06:41.005162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.643 14:06:41 -- accel/accel.sh@21 -- # val= 00:05:42.643 14:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # IFS=: 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # read -r var val 00:05:42.643 14:06:41 -- accel/accel.sh@21 -- # val= 00:05:42.643 14:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # IFS=: 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # read -r var val 00:05:42.643 14:06:41 -- accel/accel.sh@21 -- # val=0x1 00:05:42.643 14:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # IFS=: 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # read -r var val 00:05:42.643 14:06:41 -- accel/accel.sh@21 -- # val= 00:05:42.643 14:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # IFS=: 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # read -r var val 00:05:42.643 14:06:41 -- accel/accel.sh@21 -- # val= 00:05:42.643 14:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # IFS=: 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # read -r var val 00:05:42.643 14:06:41 -- accel/accel.sh@21 -- # val=crc32c 00:05:42.643 14:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.643 14:06:41 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # IFS=: 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # read -r var val 00:05:42.643 14:06:41 -- accel/accel.sh@21 -- # val=32 00:05:42.643 14:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # IFS=: 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # read -r var val 00:05:42.643 14:06:41 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:42.643 14:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # IFS=: 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # read -r var val 00:05:42.643 14:06:41 -- accel/accel.sh@21 -- # val= 00:05:42.643 14:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # IFS=: 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # read -r var val 00:05:42.643 14:06:41 -- accel/accel.sh@21 -- # val=software 00:05:42.643 14:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.643 14:06:41 -- accel/accel.sh@23 -- # accel_module=software 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # IFS=: 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # read -r var val 00:05:42.643 14:06:41 -- accel/accel.sh@21 -- # val=32 00:05:42.643 14:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # IFS=: 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # read -r var val 00:05:42.643 14:06:41 -- accel/accel.sh@21 -- # val=32 00:05:42.643 14:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # IFS=: 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # read -r var val 00:05:42.643 14:06:41 -- accel/accel.sh@21 -- # val=1 00:05:42.643 14:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # IFS=: 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # read -r var val 00:05:42.643 14:06:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:42.643 14:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # IFS=: 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # read -r var val 00:05:42.643 14:06:41 -- accel/accel.sh@21 -- # val=Yes 00:05:42.643 14:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # IFS=: 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # read -r var val 00:05:42.643 14:06:41 -- accel/accel.sh@21 -- # val= 00:05:42.643 14:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # IFS=: 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # read -r var val 00:05:42.643 14:06:41 -- accel/accel.sh@21 -- # val= 00:05:42.643 14:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # IFS=: 00:05:42.643 14:06:41 -- accel/accel.sh@20 -- # read -r var val 00:05:44.540 14:06:42 -- accel/accel.sh@21 -- # val= 00:05:44.540 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.540 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.540 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.540 14:06:42 -- accel/accel.sh@21 -- # val= 00:05:44.540 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.540 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.540 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.540 14:06:42 -- accel/accel.sh@21 -- # val= 00:05:44.540 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.540 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.540 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.540 14:06:42 -- accel/accel.sh@21 -- # val= 00:05:44.540 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.540 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.540 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.540 14:06:42 -- accel/accel.sh@21 -- # val= 00:05:44.540 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.540 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.540 14:06:42 -- 
accel/accel.sh@20 -- # read -r var val 00:05:44.540 14:06:42 -- accel/accel.sh@21 -- # val= 00:05:44.540 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.540 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:44.540 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:44.540 ************************************ 00:05:44.540 END TEST accel_crc32c 00:05:44.540 ************************************ 00:05:44.540 14:06:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:44.540 14:06:42 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:44.540 14:06:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.540 00:05:44.540 real 0m3.979s 00:05:44.540 user 0m3.514s 00:05:44.540 sys 0m0.260s 00:05:44.541 14:06:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.541 14:06:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.541 14:06:42 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:44.541 14:06:42 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:44.541 14:06:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.541 14:06:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.541 ************************************ 00:05:44.541 START TEST accel_crc32c_C2 00:05:44.541 ************************************ 00:05:44.541 14:06:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:44.541 14:06:42 -- accel/accel.sh@16 -- # local accel_opc 00:05:44.541 14:06:42 -- accel/accel.sh@17 -- # local accel_module 00:05:44.541 14:06:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:44.541 14:06:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:44.541 14:06:42 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.541 14:06:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.541 14:06:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.541 14:06:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.541 14:06:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.541 14:06:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.541 14:06:42 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.541 14:06:42 -- accel/accel.sh@42 -- # jq -r . 00:05:44.541 [2024-11-19 14:06:42.712232] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:44.541 [2024-11-19 14:06:42.712327] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58588 ] 00:05:44.541 [2024-11-19 14:06:42.858029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.541 [2024-11-19 14:06:43.020568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.439 14:06:44 -- accel/accel.sh@18 -- # out=' 00:05:46.439 SPDK Configuration: 00:05:46.439 Core mask: 0x1 00:05:46.439 00:05:46.439 Accel Perf Configuration: 00:05:46.439 Workload Type: crc32c 00:05:46.439 CRC-32C seed: 0 00:05:46.439 Transfer size: 4096 bytes 00:05:46.439 Vector count 2 00:05:46.439 Module: software 00:05:46.439 Queue depth: 32 00:05:46.439 Allocate depth: 32 00:05:46.439 # threads/core: 1 00:05:46.439 Run time: 1 seconds 00:05:46.439 Verify: Yes 00:05:46.439 00:05:46.439 Running for 1 seconds... 
00:05:46.439 00:05:46.439 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:46.439 ------------------------------------------------------------------------------------ 00:05:46.439 0,0 494592/s 1932 MiB/s 0 0 00:05:46.439 ==================================================================================== 00:05:46.439 Total 494592/s 1932 MiB/s 0 0' 00:05:46.439 14:06:44 -- accel/accel.sh@20 -- # IFS=: 00:05:46.439 14:06:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:46.439 14:06:44 -- accel/accel.sh@20 -- # read -r var val 00:05:46.439 14:06:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:46.439 14:06:44 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.439 14:06:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.439 14:06:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.439 14:06:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.439 14:06:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.439 14:06:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.439 14:06:44 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.439 14:06:44 -- accel/accel.sh@42 -- # jq -r . 00:05:46.439 [2024-11-19 14:06:44.693330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:46.439 [2024-11-19 14:06:44.693790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58608 ] 00:05:46.439 [2024-11-19 14:06:44.839003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.697 [2024-11-19 14:06:45.003424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.697 14:06:45 -- accel/accel.sh@21 -- # val= 00:05:46.697 14:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.697 14:06:45 -- accel/accel.sh@20 -- # IFS=: 00:05:46.697 14:06:45 -- accel/accel.sh@20 -- # read -r var val 00:05:46.697 14:06:45 -- accel/accel.sh@21 -- # val= 00:05:46.697 14:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.697 14:06:45 -- accel/accel.sh@20 -- # IFS=: 00:05:46.697 14:06:45 -- accel/accel.sh@20 -- # read -r var val 00:05:46.697 14:06:45 -- accel/accel.sh@21 -- # val=0x1 00:05:46.697 14:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.697 14:06:45 -- accel/accel.sh@20 -- # IFS=: 00:05:46.697 14:06:45 -- accel/accel.sh@20 -- # read -r var val 00:05:46.697 14:06:45 -- accel/accel.sh@21 -- # val= 00:05:46.697 14:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.697 14:06:45 -- accel/accel.sh@20 -- # IFS=: 00:05:46.697 14:06:45 -- accel/accel.sh@20 -- # read -r var val 00:05:46.697 14:06:45 -- accel/accel.sh@21 -- # val= 00:05:46.697 14:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.697 14:06:45 -- accel/accel.sh@20 -- # IFS=: 00:05:46.697 14:06:45 -- accel/accel.sh@20 -- # read -r var val 00:05:46.697 14:06:45 -- accel/accel.sh@21 -- # val=crc32c 00:05:46.697 14:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.697 14:06:45 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:46.697 14:06:45 -- accel/accel.sh@20 -- # IFS=: 00:05:46.697 14:06:45 -- accel/accel.sh@20 -- # read -r var val 00:05:46.697 14:06:45 -- accel/accel.sh@21 -- # val=0 00:05:46.697 14:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.697 14:06:45 -- accel/accel.sh@20 -- # IFS=: 00:05:46.697 14:06:45 -- accel/accel.sh@20 -- # read -r var val 00:05:46.698 14:06:45 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:46.698 14:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # IFS=: 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # read -r var val 00:05:46.698 14:06:45 -- accel/accel.sh@21 -- # val= 00:05:46.698 14:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # IFS=: 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # read -r var val 00:05:46.698 14:06:45 -- accel/accel.sh@21 -- # val=software 00:05:46.698 14:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.698 14:06:45 -- accel/accel.sh@23 -- # accel_module=software 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # IFS=: 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # read -r var val 00:05:46.698 14:06:45 -- accel/accel.sh@21 -- # val=32 00:05:46.698 14:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # IFS=: 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # read -r var val 00:05:46.698 14:06:45 -- accel/accel.sh@21 -- # val=32 00:05:46.698 14:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # IFS=: 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # read -r var val 00:05:46.698 14:06:45 -- accel/accel.sh@21 -- # val=1 00:05:46.698 14:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # IFS=: 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # read -r var val 00:05:46.698 14:06:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:46.698 14:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # IFS=: 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # read -r var val 00:05:46.698 14:06:45 -- accel/accel.sh@21 -- # val=Yes 00:05:46.698 14:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # IFS=: 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # read -r var val 00:05:46.698 14:06:45 -- accel/accel.sh@21 -- # val= 00:05:46.698 14:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # IFS=: 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # read -r var val 00:05:46.698 14:06:45 -- accel/accel.sh@21 -- # val= 00:05:46.698 14:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # IFS=: 00:05:46.698 14:06:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.071 14:06:46 -- accel/accel.sh@21 -- # val= 00:05:48.329 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.329 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:48.329 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:48.329 14:06:46 -- accel/accel.sh@21 -- # val= 00:05:48.329 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.329 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:48.329 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:48.329 14:06:46 -- accel/accel.sh@21 -- # val= 00:05:48.329 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.329 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:48.329 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:48.329 14:06:46 -- accel/accel.sh@21 -- # val= 00:05:48.329 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.329 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:48.329 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:48.329 14:06:46 -- accel/accel.sh@21 -- # val= 00:05:48.329 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.329 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:48.329 14:06:46 -- 
accel/accel.sh@20 -- # read -r var val 00:05:48.329 14:06:46 -- accel/accel.sh@21 -- # val= 00:05:48.329 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.329 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:48.329 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:48.329 14:06:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:48.329 14:06:46 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:48.329 14:06:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.329 00:05:48.329 real 0m3.965s 00:05:48.329 user 0m3.494s 00:05:48.329 sys 0m0.268s 00:05:48.329 14:06:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.329 14:06:46 -- common/autotest_common.sh@10 -- # set +x 00:05:48.329 ************************************ 00:05:48.329 END TEST accel_crc32c_C2 00:05:48.329 ************************************ 00:05:48.329 14:06:46 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:48.329 14:06:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:48.329 14:06:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.329 14:06:46 -- common/autotest_common.sh@10 -- # set +x 00:05:48.329 ************************************ 00:05:48.329 START TEST accel_copy 00:05:48.329 ************************************ 00:05:48.329 14:06:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:05:48.329 14:06:46 -- accel/accel.sh@16 -- # local accel_opc 00:05:48.329 14:06:46 -- accel/accel.sh@17 -- # local accel_module 00:05:48.329 14:06:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:48.329 14:06:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:48.329 14:06:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.329 14:06:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.329 14:06:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.329 14:06:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.329 14:06:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.329 14:06:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.329 14:06:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.329 14:06:46 -- accel/accel.sh@42 -- # jq -r . 00:05:48.329 [2024-11-19 14:06:46.739026] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:48.329 [2024-11-19 14:06:46.739141] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58650 ] 00:05:48.329 [2024-11-19 14:06:46.886276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.587 [2024-11-19 14:06:47.056622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.484 14:06:48 -- accel/accel.sh@18 -- # out=' 00:05:50.484 SPDK Configuration: 00:05:50.484 Core mask: 0x1 00:05:50.484 00:05:50.484 Accel Perf Configuration: 00:05:50.484 Workload Type: copy 00:05:50.484 Transfer size: 4096 bytes 00:05:50.484 Vector count 1 00:05:50.484 Module: software 00:05:50.484 Queue depth: 32 00:05:50.484 Allocate depth: 32 00:05:50.484 # threads/core: 1 00:05:50.484 Run time: 1 seconds 00:05:50.484 Verify: Yes 00:05:50.484 00:05:50.484 Running for 1 seconds... 
00:05:50.484 00:05:50.485 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:50.485 ------------------------------------------------------------------------------------ 00:05:50.485 0,0 341344/s 1333 MiB/s 0 0 00:05:50.485 ==================================================================================== 00:05:50.485 Total 341344/s 1333 MiB/s 0 0' 00:05:50.485 14:06:48 -- accel/accel.sh@20 -- # IFS=: 00:05:50.485 14:06:48 -- accel/accel.sh@20 -- # read -r var val 00:05:50.485 14:06:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:50.485 14:06:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:50.485 14:06:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:50.485 14:06:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:50.485 14:06:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.485 14:06:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.485 14:06:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:50.485 14:06:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:50.485 14:06:48 -- accel/accel.sh@41 -- # local IFS=, 00:05:50.485 14:06:48 -- accel/accel.sh@42 -- # jq -r . 00:05:50.485 [2024-11-19 14:06:48.832777] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:50.485 [2024-11-19 14:06:48.832913] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58681 ] 00:05:50.485 [2024-11-19 14:06:48.978692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.742 [2024-11-19 14:06:49.163604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.005 14:06:49 -- accel/accel.sh@21 -- # val= 00:05:51.005 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.005 14:06:49 -- accel/accel.sh@21 -- # val= 00:05:51.005 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.005 14:06:49 -- accel/accel.sh@21 -- # val=0x1 00:05:51.005 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.005 14:06:49 -- accel/accel.sh@21 -- # val= 00:05:51.005 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.005 14:06:49 -- accel/accel.sh@21 -- # val= 00:05:51.005 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.005 14:06:49 -- accel/accel.sh@21 -- # val=copy 00:05:51.005 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.005 14:06:49 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.005 14:06:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:51.005 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.005 14:06:49 -- 
accel/accel.sh@21 -- # val= 00:05:51.005 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.005 14:06:49 -- accel/accel.sh@21 -- # val=software 00:05:51.005 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.005 14:06:49 -- accel/accel.sh@23 -- # accel_module=software 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.005 14:06:49 -- accel/accel.sh@21 -- # val=32 00:05:51.005 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.005 14:06:49 -- accel/accel.sh@21 -- # val=32 00:05:51.005 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.005 14:06:49 -- accel/accel.sh@21 -- # val=1 00:05:51.005 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.005 14:06:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:51.005 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.005 14:06:49 -- accel/accel.sh@21 -- # val=Yes 00:05:51.005 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.005 14:06:49 -- accel/accel.sh@21 -- # val= 00:05:51.005 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:51.005 14:06:49 -- accel/accel.sh@21 -- # val= 00:05:51.005 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:51.005 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:52.412 14:06:50 -- accel/accel.sh@21 -- # val= 00:05:52.413 14:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.413 14:06:50 -- accel/accel.sh@20 -- # IFS=: 00:05:52.413 14:06:50 -- accel/accel.sh@20 -- # read -r var val 00:05:52.413 14:06:50 -- accel/accel.sh@21 -- # val= 00:05:52.413 14:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.413 14:06:50 -- accel/accel.sh@20 -- # IFS=: 00:05:52.413 14:06:50 -- accel/accel.sh@20 -- # read -r var val 00:05:52.413 14:06:50 -- accel/accel.sh@21 -- # val= 00:05:52.413 14:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.413 14:06:50 -- accel/accel.sh@20 -- # IFS=: 00:05:52.413 14:06:50 -- accel/accel.sh@20 -- # read -r var val 00:05:52.413 14:06:50 -- accel/accel.sh@21 -- # val= 00:05:52.413 14:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.413 14:06:50 -- accel/accel.sh@20 -- # IFS=: 00:05:52.413 14:06:50 -- accel/accel.sh@20 -- # read -r var val 00:05:52.413 14:06:50 -- accel/accel.sh@21 -- # val= 00:05:52.413 14:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.413 14:06:50 -- accel/accel.sh@20 -- # IFS=: 00:05:52.413 14:06:50 -- accel/accel.sh@20 -- # read -r var val 00:05:52.413 14:06:50 -- accel/accel.sh@21 -- # val= 00:05:52.413 14:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.413 14:06:50 -- accel/accel.sh@20 -- # IFS=: 00:05:52.413 14:06:50 -- 
accel/accel.sh@20 -- # read -r var val 00:05:52.413 14:06:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:52.413 14:06:50 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:52.413 14:06:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.413 00:05:52.413 real 0m4.091s 00:05:52.413 user 0m3.607s 00:05:52.413 sys 0m0.275s 00:05:52.413 14:06:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.413 ************************************ 00:05:52.413 END TEST accel_copy 00:05:52.413 ************************************ 00:05:52.413 14:06:50 -- common/autotest_common.sh@10 -- # set +x 00:05:52.413 14:06:50 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:52.413 14:06:50 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:52.413 14:06:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.413 14:06:50 -- common/autotest_common.sh@10 -- # set +x 00:05:52.413 ************************************ 00:05:52.413 START TEST accel_fill 00:05:52.413 ************************************ 00:05:52.413 14:06:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:52.413 14:06:50 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.413 14:06:50 -- accel/accel.sh@17 -- # local accel_module 00:05:52.413 14:06:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:52.413 14:06:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:52.413 14:06:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.413 14:06:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.413 14:06:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.413 14:06:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.413 14:06:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.413 14:06:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.413 14:06:50 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.413 14:06:50 -- accel/accel.sh@42 -- # jq -r . 00:05:52.413 [2024-11-19 14:06:50.877345] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:52.413 [2024-11-19 14:06:50.877553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58722 ] 00:05:52.671 [2024-11-19 14:06:51.025054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.671 [2024-11-19 14:06:51.175118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.574 14:06:52 -- accel/accel.sh@18 -- # out=' 00:05:54.574 SPDK Configuration: 00:05:54.574 Core mask: 0x1 00:05:54.574 00:05:54.574 Accel Perf Configuration: 00:05:54.574 Workload Type: fill 00:05:54.574 Fill pattern: 0x80 00:05:54.574 Transfer size: 4096 bytes 00:05:54.574 Vector count 1 00:05:54.574 Module: software 00:05:54.574 Queue depth: 64 00:05:54.574 Allocate depth: 64 00:05:54.574 # threads/core: 1 00:05:54.574 Run time: 1 seconds 00:05:54.574 Verify: Yes 00:05:54.574 00:05:54.574 Running for 1 seconds... 
00:05:54.574 00:05:54.574 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:54.574 ------------------------------------------------------------------------------------ 00:05:54.574 0,0 598592/s 2338 MiB/s 0 0 00:05:54.574 ==================================================================================== 00:05:54.574 Total 598592/s 2338 MiB/s 0 0' 00:05:54.574 14:06:52 -- accel/accel.sh@20 -- # IFS=: 00:05:54.574 14:06:52 -- accel/accel.sh@20 -- # read -r var val 00:05:54.574 14:06:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:54.574 14:06:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:54.574 14:06:52 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.574 14:06:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.574 14:06:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.574 14:06:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.574 14:06:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.574 14:06:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.574 14:06:52 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.574 14:06:52 -- accel/accel.sh@42 -- # jq -r . 00:05:54.574 [2024-11-19 14:06:52.804459] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:54.574 [2024-11-19 14:06:52.804566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58742 ] 00:05:54.574 [2024-11-19 14:06:52.950362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.574 [2024-11-19 14:06:53.096286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.833 14:06:53 -- accel/accel.sh@21 -- # val= 00:05:54.833 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:54.833 14:06:53 -- accel/accel.sh@21 -- # val= 00:05:54.833 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:54.833 14:06:53 -- accel/accel.sh@21 -- # val=0x1 00:05:54.833 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:54.833 14:06:53 -- accel/accel.sh@21 -- # val= 00:05:54.833 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:54.833 14:06:53 -- accel/accel.sh@21 -- # val= 00:05:54.833 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:54.833 14:06:53 -- accel/accel.sh@21 -- # val=fill 00:05:54.833 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.833 14:06:53 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:54.833 14:06:53 -- accel/accel.sh@21 -- # val=0x80 00:05:54.833 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # read -r var val 
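The bandwidth column in the table above is just transfers per second multiplied by the transfer size. A quick awk check of the fill row, with both numbers copied from the table:

# 598592 transfers/s at 4096 bytes per transfer, expressed in MiB/s:
awk 'BEGIN { printf "%d MiB/s\n", 598592 * 4096 / (1024 * 1024) }'
# prints: 2338 MiB/s, matching both the 0,0 row and the Total row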
00:05:54.833 14:06:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:54.833 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:54.833 14:06:53 -- accel/accel.sh@21 -- # val= 00:05:54.833 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:54.833 14:06:53 -- accel/accel.sh@21 -- # val=software 00:05:54.833 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.833 14:06:53 -- accel/accel.sh@23 -- # accel_module=software 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:54.833 14:06:53 -- accel/accel.sh@21 -- # val=64 00:05:54.833 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:54.833 14:06:53 -- accel/accel.sh@21 -- # val=64 00:05:54.833 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:54.833 14:06:53 -- accel/accel.sh@21 -- # val=1 00:05:54.833 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:54.833 14:06:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:54.833 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:54.833 14:06:53 -- accel/accel.sh@21 -- # val=Yes 00:05:54.833 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:54.833 14:06:53 -- accel/accel.sh@21 -- # val= 00:05:54.833 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:54.833 14:06:53 -- accel/accel.sh@21 -- # val= 00:05:54.833 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:54.833 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:56.207 14:06:54 -- accel/accel.sh@21 -- # val= 00:05:56.207 14:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.207 14:06:54 -- accel/accel.sh@20 -- # IFS=: 00:05:56.207 14:06:54 -- accel/accel.sh@20 -- # read -r var val 00:05:56.207 14:06:54 -- accel/accel.sh@21 -- # val= 00:05:56.207 14:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.207 14:06:54 -- accel/accel.sh@20 -- # IFS=: 00:05:56.207 14:06:54 -- accel/accel.sh@20 -- # read -r var val 00:05:56.207 14:06:54 -- accel/accel.sh@21 -- # val= 00:05:56.207 14:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.207 14:06:54 -- accel/accel.sh@20 -- # IFS=: 00:05:56.207 14:06:54 -- accel/accel.sh@20 -- # read -r var val 00:05:56.207 14:06:54 -- accel/accel.sh@21 -- # val= 00:05:56.207 14:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.207 14:06:54 -- accel/accel.sh@20 -- # IFS=: 00:05:56.207 14:06:54 -- accel/accel.sh@20 -- # read -r var val 00:05:56.207 14:06:54 -- accel/accel.sh@21 -- # val= 00:05:56.207 14:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.207 14:06:54 -- accel/accel.sh@20 -- # IFS=: 
00:05:56.207 14:06:54 -- accel/accel.sh@20 -- # read -r var val 00:05:56.207 14:06:54 -- accel/accel.sh@21 -- # val= 00:05:56.207 14:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.207 14:06:54 -- accel/accel.sh@20 -- # IFS=: 00:05:56.207 14:06:54 -- accel/accel.sh@20 -- # read -r var val 00:05:56.207 ************************************ 00:05:56.207 END TEST accel_fill 00:05:56.207 ************************************ 00:05:56.207 14:06:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:56.207 14:06:54 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:56.207 14:06:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.207 00:05:56.207 real 0m3.853s 00:05:56.207 user 0m3.409s 00:05:56.207 sys 0m0.239s 00:05:56.207 14:06:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.207 14:06:54 -- common/autotest_common.sh@10 -- # set +x 00:05:56.207 14:06:54 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:56.207 14:06:54 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:56.207 14:06:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.207 14:06:54 -- common/autotest_common.sh@10 -- # set +x 00:05:56.207 ************************************ 00:05:56.207 START TEST accel_copy_crc32c 00:05:56.207 ************************************ 00:05:56.207 14:06:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:05:56.207 14:06:54 -- accel/accel.sh@16 -- # local accel_opc 00:05:56.207 14:06:54 -- accel/accel.sh@17 -- # local accel_module 00:05:56.207 14:06:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:56.207 14:06:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:56.207 14:06:54 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.207 14:06:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.207 14:06:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.207 14:06:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.207 14:06:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.207 14:06:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.207 14:06:54 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.207 14:06:54 -- accel/accel.sh@42 -- # jq -r . 00:05:56.466 [2024-11-19 14:06:54.789423] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:56.466 [2024-11-19 14:06:54.789535] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58783 ] 00:05:56.466 [2024-11-19 14:06:54.939936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.725 [2024-11-19 14:06:55.120939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.628 14:06:56 -- accel/accel.sh@18 -- # out=' 00:05:58.628 SPDK Configuration: 00:05:58.628 Core mask: 0x1 00:05:58.628 00:05:58.628 Accel Perf Configuration: 00:05:58.628 Workload Type: copy_crc32c 00:05:58.628 CRC-32C seed: 0 00:05:58.628 Vector size: 4096 bytes 00:05:58.628 Transfer size: 4096 bytes 00:05:58.628 Vector count 1 00:05:58.628 Module: software 00:05:58.628 Queue depth: 32 00:05:58.628 Allocate depth: 32 00:05:58.628 # threads/core: 1 00:05:58.628 Run time: 1 seconds 00:05:58.628 Verify: Yes 00:05:58.628 00:05:58.628 Running for 1 seconds... 
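The ************ START/END TEST ************ banners and the real/user/sys triplet that close each case come from the run_test wrapper in autotest_common.sh. Its source is not part of this log, so the following is only an assumed stand-in with the same observable behavior:

# Assumed shape of run_test (illustrative, not the verbatim SPDK source):
run_test() {
    local name=$1
    shift
    echo "************ START TEST $name ************"
    time "$@"    # emits the real/user/sys lines seen after each test
    local rc=$?
    echo "************ END TEST $name ************"
    return $rc
}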
00:05:58.628 00:05:58.628 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:58.628 ------------------------------------------------------------------------------------ 00:05:58.628 0,0 237600/s 928 MiB/s 0 0 00:05:58.628 ==================================================================================== 00:05:58.628 Total 237600/s 928 MiB/s 0 0' 00:05:58.628 14:06:56 -- accel/accel.sh@20 -- # IFS=: 00:05:58.628 14:06:56 -- accel/accel.sh@20 -- # read -r var val 00:05:58.628 14:06:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:58.628 14:06:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:58.628 14:06:56 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.628 14:06:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.628 14:06:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.628 14:06:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.628 14:06:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.628 14:06:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.628 14:06:56 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.628 14:06:56 -- accel/accel.sh@42 -- # jq -r . 00:05:58.628 [2024-11-19 14:06:56.915316] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:58.628 [2024-11-19 14:06:56.915430] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58809 ] 00:05:58.628 [2024-11-19 14:06:57.064584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.887 [2024-11-19 14:06:57.245779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.887 14:06:57 -- accel/accel.sh@21 -- # val= 00:05:58.887 14:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # IFS=: 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # read -r var val 00:05:58.887 14:06:57 -- accel/accel.sh@21 -- # val= 00:05:58.887 14:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # IFS=: 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # read -r var val 00:05:58.887 14:06:57 -- accel/accel.sh@21 -- # val=0x1 00:05:58.887 14:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # IFS=: 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # read -r var val 00:05:58.887 14:06:57 -- accel/accel.sh@21 -- # val= 00:05:58.887 14:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # IFS=: 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # read -r var val 00:05:58.887 14:06:57 -- accel/accel.sh@21 -- # val= 00:05:58.887 14:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # IFS=: 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # read -r var val 00:05:58.887 14:06:57 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:58.887 14:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.887 14:06:57 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # IFS=: 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # read -r var val 00:05:58.887 14:06:57 -- accel/accel.sh@21 -- # val=0 00:05:58.887 14:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # IFS=: 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # read -r var val 00:05:58.887 
14:06:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:58.887 14:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # IFS=: 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # read -r var val 00:05:58.887 14:06:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:58.887 14:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # IFS=: 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # read -r var val 00:05:58.887 14:06:57 -- accel/accel.sh@21 -- # val= 00:05:58.887 14:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # IFS=: 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # read -r var val 00:05:58.887 14:06:57 -- accel/accel.sh@21 -- # val=software 00:05:58.887 14:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.887 14:06:57 -- accel/accel.sh@23 -- # accel_module=software 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # IFS=: 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # read -r var val 00:05:58.887 14:06:57 -- accel/accel.sh@21 -- # val=32 00:05:58.887 14:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # IFS=: 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # read -r var val 00:05:58.887 14:06:57 -- accel/accel.sh@21 -- # val=32 00:05:58.887 14:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # IFS=: 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # read -r var val 00:05:58.887 14:06:57 -- accel/accel.sh@21 -- # val=1 00:05:58.887 14:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # IFS=: 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # read -r var val 00:05:58.887 14:06:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:58.887 14:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # IFS=: 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # read -r var val 00:05:58.887 14:06:57 -- accel/accel.sh@21 -- # val=Yes 00:05:58.887 14:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # IFS=: 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # read -r var val 00:05:58.887 14:06:57 -- accel/accel.sh@21 -- # val= 00:05:58.887 14:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # IFS=: 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # read -r var val 00:05:58.887 14:06:57 -- accel/accel.sh@21 -- # val= 00:05:58.887 14:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # IFS=: 00:05:58.887 14:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:00.806 14:06:58 -- accel/accel.sh@21 -- # val= 00:06:00.806 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.806 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:00.806 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:00.806 14:06:58 -- accel/accel.sh@21 -- # val= 00:06:00.806 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.806 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:00.806 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:00.806 14:06:58 -- accel/accel.sh@21 -- # val= 00:06:00.806 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.806 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:00.806 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:00.806 14:06:58 -- accel/accel.sh@21 -- # val= 00:06:00.806 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.806 14:06:58 -- accel/accel.sh@20 -- # IFS=: 
00:06:00.806 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:00.806 14:06:58 -- accel/accel.sh@21 -- # val= 00:06:00.807 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.807 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:00.807 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:00.807 14:06:58 -- accel/accel.sh@21 -- # val= 00:06:00.807 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.807 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:00.807 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:00.807 14:06:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:00.807 14:06:59 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:00.807 14:06:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.807 00:06:00.807 real 0m4.250s 00:06:00.807 user 0m3.779s 00:06:00.807 sys 0m0.263s 00:06:00.807 14:06:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.807 ************************************ 00:06:00.807 14:06:59 -- common/autotest_common.sh@10 -- # set +x 00:06:00.807 END TEST accel_copy_crc32c 00:06:00.807 ************************************ 00:06:00.807 14:06:59 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:00.807 14:06:59 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:00.807 14:06:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.807 14:06:59 -- common/autotest_common.sh@10 -- # set +x 00:06:00.807 ************************************ 00:06:00.807 START TEST accel_copy_crc32c_C2 00:06:00.807 ************************************ 00:06:00.807 14:06:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:00.807 14:06:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:00.807 14:06:59 -- accel/accel.sh@17 -- # local accel_module 00:06:00.807 14:06:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:00.808 14:06:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:00.808 14:06:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.808 14:06:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.808 14:06:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.808 14:06:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.808 14:06:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.808 14:06:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.808 14:06:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.808 14:06:59 -- accel/accel.sh@42 -- # jq -r . 00:06:00.808 [2024-11-19 14:06:59.084868] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
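Every accel_perf launch in this section receives its configuration as -c /dev/fd/62: build_accel_config assembles a JSON document (empty in these runs, as the accel_json_cfg=() and [[ -n '' ]] trace lines show) and hands it to the binary over file descriptor 62. Roughly the same effect can be had with process substitution; the empty JSON object below is a placeholder for whatever the harness would generate, so treat the whole line as illustrative rather than a supported invocation:

# Sketch: <( ... ) yields a /dev/fd/NN path, much like the harness's fd 62.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -c <(echo '{}') -t 1 -w copy_crc32c -y -C 2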
00:06:00.808 [2024-11-19 14:06:59.084986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58856 ] 00:06:00.808 [2024-11-19 14:06:59.235103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.078 [2024-11-19 14:06:59.410584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.980 14:07:01 -- accel/accel.sh@18 -- # out=' 00:06:02.980 SPDK Configuration: 00:06:02.980 Core mask: 0x1 00:06:02.980 00:06:02.980 Accel Perf Configuration: 00:06:02.980 Workload Type: copy_crc32c 00:06:02.980 CRC-32C seed: 0 00:06:02.980 Vector size: 4096 bytes 00:06:02.980 Transfer size: 8192 bytes 00:06:02.980 Vector count 2 00:06:02.980 Module: software 00:06:02.980 Queue depth: 32 00:06:02.980 Allocate depth: 32 00:06:02.980 # threads/core: 1 00:06:02.980 Run time: 1 seconds 00:06:02.980 Verify: Yes 00:06:02.980 00:06:02.980 Running for 1 seconds... 00:06:02.980 00:06:02.980 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:02.980 ------------------------------------------------------------------------------------ 00:06:02.980 0,0 177152/s 1384 MiB/s 0 0 00:06:02.980 ==================================================================================== 00:06:02.981 Total 177152/s 1384 MiB/s 0 0' 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:02.981 14:07:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:02.981 14:07:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:02.981 14:07:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.981 14:07:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.981 14:07:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.981 14:07:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.981 14:07:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.981 14:07:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.981 14:07:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.981 14:07:01 -- accel/accel.sh@42 -- # jq -r . 00:06:02.981 [2024-11-19 14:07:01.122320] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:02.981 [2024-11-19 14:07:01.122422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58882 ] 00:06:02.981 [2024-11-19 14:07:01.272697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.981 [2024-11-19 14:07:01.410832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.981 14:07:01 -- accel/accel.sh@21 -- # val= 00:06:02.981 14:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:02.981 14:07:01 -- accel/accel.sh@21 -- # val= 00:06:02.981 14:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:02.981 14:07:01 -- accel/accel.sh@21 -- # val=0x1 00:06:02.981 14:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:02.981 14:07:01 -- accel/accel.sh@21 -- # val= 00:06:02.981 14:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:02.981 14:07:01 -- accel/accel.sh@21 -- # val= 00:06:02.981 14:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:02.981 14:07:01 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:02.981 14:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.981 14:07:01 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:02.981 14:07:01 -- accel/accel.sh@21 -- # val=0 00:06:02.981 14:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:02.981 14:07:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:02.981 14:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:02.981 14:07:01 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:02.981 14:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:02.981 14:07:01 -- accel/accel.sh@21 -- # val= 00:06:02.981 14:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:02.981 14:07:01 -- accel/accel.sh@21 -- # val=software 00:06:02.981 14:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.981 14:07:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:02.981 14:07:01 -- accel/accel.sh@21 -- # val=32 00:06:02.981 14:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:02.981 14:07:01 -- accel/accel.sh@21 -- # val=32 
00:06:02.981 14:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:02.981 14:07:01 -- accel/accel.sh@21 -- # val=1 00:06:02.981 14:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:02.981 14:07:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:02.981 14:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:02.981 14:07:01 -- accel/accel.sh@21 -- # val=Yes 00:06:02.981 14:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:02.981 14:07:01 -- accel/accel.sh@21 -- # val= 00:06:02.981 14:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:02.981 14:07:01 -- accel/accel.sh@21 -- # val= 00:06:02.981 14:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:02.981 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:04.931 14:07:02 -- accel/accel.sh@21 -- # val= 00:06:04.931 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.931 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.931 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.931 14:07:02 -- accel/accel.sh@21 -- # val= 00:06:04.931 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.931 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.931 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.931 14:07:02 -- accel/accel.sh@21 -- # val= 00:06:04.931 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.931 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.931 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.931 14:07:02 -- accel/accel.sh@21 -- # val= 00:06:04.931 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.931 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.931 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.931 14:07:02 -- accel/accel.sh@21 -- # val= 00:06:04.931 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.931 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.931 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.931 14:07:02 -- accel/accel.sh@21 -- # val= 00:06:04.931 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.932 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:04.932 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:04.932 14:07:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:04.932 14:07:02 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:04.932 14:07:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.932 00:06:04.932 real 0m3.943s 00:06:04.932 user 0m3.476s 00:06:04.932 sys 0m0.258s 00:06:04.932 14:07:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.932 14:07:02 -- common/autotest_common.sh@10 -- # set +x 00:06:04.932 ************************************ 00:06:04.932 END TEST accel_copy_crc32c_C2 00:06:04.932 ************************************ 00:06:04.932 14:07:03 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:04.932 14:07:03 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:04.932 14:07:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.932 14:07:03 -- common/autotest_common.sh@10 -- # set +x 00:06:04.932 ************************************ 00:06:04.932 START TEST accel_dualcast 00:06:04.932 ************************************ 00:06:04.932 14:07:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:04.932 14:07:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:04.932 14:07:03 -- accel/accel.sh@17 -- # local accel_module 00:06:04.932 14:07:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:04.932 14:07:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:04.932 14:07:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.932 14:07:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.932 14:07:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.932 14:07:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.932 14:07:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.932 14:07:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.932 14:07:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.932 14:07:03 -- accel/accel.sh@42 -- # jq -r . 00:06:04.932 [2024-11-19 14:07:03.074162] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:04.932 [2024-11-19 14:07:03.074251] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58925 ] 00:06:04.932 [2024-11-19 14:07:03.216762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.932 [2024-11-19 14:07:03.356146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.830 14:07:04 -- accel/accel.sh@18 -- # out=' 00:06:06.830 SPDK Configuration: 00:06:06.830 Core mask: 0x1 00:06:06.830 00:06:06.830 Accel Perf Configuration: 00:06:06.830 Workload Type: dualcast 00:06:06.830 Transfer size: 4096 bytes 00:06:06.830 Vector count 1 00:06:06.830 Module: software 00:06:06.830 Queue depth: 32 00:06:06.830 Allocate depth: 32 00:06:06.830 # threads/core: 1 00:06:06.830 Run time: 1 seconds 00:06:06.830 Verify: Yes 00:06:06.830 00:06:06.830 Running for 1 seconds... 00:06:06.830 00:06:06.830 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:06.830 ------------------------------------------------------------------------------------ 00:06:06.830 0,0 442976/s 1730 MiB/s 0 0 00:06:06.830 ==================================================================================== 00:06:06.830 Total 442976/s 1730 MiB/s 0 0' 00:06:06.830 14:07:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:06.830 14:07:04 -- accel/accel.sh@20 -- # IFS=: 00:06:06.830 14:07:04 -- accel/accel.sh@20 -- # read -r var val 00:06:06.830 14:07:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:06.830 14:07:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.830 14:07:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.830 14:07:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.830 14:07:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.830 14:07:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.830 14:07:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.830 14:07:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.830 14:07:04 -- accel/accel.sh@42 -- # jq -r . 
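The recurring [[ software == \s\o\f\t\w\a\r\e ]] checks look garbled but are ordinary bash: when the pattern side of == inside [[ ]] is quoted in the script, it matches literally instead of as a glob, and xtrace renders that by backslash-escaping every character. A two-line demo of the difference:

set -x
[[ software == "software" ]]   # traced as: [[ software == \s\o\f\t\w\a\r\e ]]
[[ software == soft* ]]        # traced with the glob left intact: soft*
set +x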
00:06:06.830 [2024-11-19 14:07:04.969528] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:06.830 [2024-11-19 14:07:04.969655] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58945 ] 00:06:06.830 [2024-11-19 14:07:05.122447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.830 [2024-11-19 14:07:05.290554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.088 14:07:05 -- accel/accel.sh@21 -- # val= 00:06:07.088 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.088 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:07.088 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:07.088 14:07:05 -- accel/accel.sh@21 -- # val= 00:06:07.088 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.088 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:07.088 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:07.088 14:07:05 -- accel/accel.sh@21 -- # val=0x1 00:06:07.088 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.088 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:07.088 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:07.088 14:07:05 -- accel/accel.sh@21 -- # val= 00:06:07.088 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.088 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:07.088 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:07.088 14:07:05 -- accel/accel.sh@21 -- # val= 00:06:07.088 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.088 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:07.088 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:07.088 14:07:05 -- accel/accel.sh@21 -- # val=dualcast 00:06:07.088 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.088 14:07:05 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:07.088 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:07.088 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:07.088 14:07:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:07.088 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.088 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:07.088 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:07.088 14:07:05 -- accel/accel.sh@21 -- # val= 00:06:07.088 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.088 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:07.088 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:07.088 14:07:05 -- accel/accel.sh@21 -- # val=software 00:06:07.088 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.089 14:07:05 -- accel/accel.sh@23 -- # accel_module=software 00:06:07.089 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:07.089 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:07.089 14:07:05 -- accel/accel.sh@21 -- # val=32 00:06:07.089 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.089 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:07.089 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:07.089 14:07:05 -- accel/accel.sh@21 -- # val=32 00:06:07.089 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.089 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:07.089 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:07.089 14:07:05 -- accel/accel.sh@21 -- # val=1 00:06:07.089 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.089 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:07.089 
14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:07.089 14:07:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:07.089 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.089 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:07.089 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:07.089 14:07:05 -- accel/accel.sh@21 -- # val=Yes 00:06:07.089 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.089 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:07.089 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:07.089 14:07:05 -- accel/accel.sh@21 -- # val= 00:06:07.089 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.089 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:07.089 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:07.089 14:07:05 -- accel/accel.sh@21 -- # val= 00:06:07.089 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.089 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:07.089 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:08.464 14:07:06 -- accel/accel.sh@21 -- # val= 00:06:08.464 14:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.464 14:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:08.464 14:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:08.464 14:07:06 -- accel/accel.sh@21 -- # val= 00:06:08.464 14:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.464 14:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:08.464 14:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:08.464 14:07:06 -- accel/accel.sh@21 -- # val= 00:06:08.464 14:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.464 14:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:08.464 14:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:08.464 14:07:06 -- accel/accel.sh@21 -- # val= 00:06:08.464 14:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.464 14:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:08.464 14:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:08.464 14:07:06 -- accel/accel.sh@21 -- # val= 00:06:08.465 14:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.465 14:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:08.465 14:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:08.465 14:07:06 -- accel/accel.sh@21 -- # val= 00:06:08.465 14:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.465 14:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:08.465 14:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:08.465 14:07:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:08.465 14:07:06 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:08.465 14:07:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.465 00:06:08.465 real 0m3.863s 00:06:08.465 user 0m3.424s 00:06:08.465 sys 0m0.234s 00:06:08.465 14:07:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.465 14:07:06 -- common/autotest_common.sh@10 -- # set +x 00:06:08.465 ************************************ 00:06:08.465 END TEST accel_dualcast 00:06:08.465 ************************************ 00:06:08.465 14:07:06 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:08.465 14:07:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:08.465 14:07:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.465 14:07:06 -- common/autotest_common.sh@10 -- # set +x 00:06:08.465 ************************************ 00:06:08.465 START TEST accel_compare 00:06:08.465 ************************************ 00:06:08.465 14:07:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:08.465 
14:07:06 -- accel/accel.sh@16 -- # local accel_opc 00:06:08.465 14:07:06 -- accel/accel.sh@17 -- # local accel_module 00:06:08.465 14:07:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:08.465 14:07:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:08.465 14:07:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.465 14:07:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.465 14:07:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.465 14:07:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.465 14:07:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.465 14:07:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.465 14:07:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.465 14:07:06 -- accel/accel.sh@42 -- # jq -r . 00:06:08.465 [2024-11-19 14:07:06.975011] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.465 [2024-11-19 14:07:06.975131] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58986 ] 00:06:08.724 [2024-11-19 14:07:07.120863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.724 [2024-11-19 14:07:07.258767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.627 14:07:08 -- accel/accel.sh@18 -- # out=' 00:06:10.627 SPDK Configuration: 00:06:10.627 Core mask: 0x1 00:06:10.627 00:06:10.627 Accel Perf Configuration: 00:06:10.627 Workload Type: compare 00:06:10.627 Transfer size: 4096 bytes 00:06:10.627 Vector count 1 00:06:10.627 Module: software 00:06:10.627 Queue depth: 32 00:06:10.627 Allocate depth: 32 00:06:10.627 # threads/core: 1 00:06:10.627 Run time: 1 seconds 00:06:10.627 Verify: Yes 00:06:10.627 00:06:10.627 Running for 1 seconds... 00:06:10.627 00:06:10.627 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:10.627 ------------------------------------------------------------------------------------ 00:06:10.627 0,0 559840/s 2186 MiB/s 0 0 00:06:10.627 ==================================================================================== 00:06:10.627 Total 559840/s 2186 MiB/s 0 0' 00:06:10.627 14:07:08 -- accel/accel.sh@20 -- # IFS=: 00:06:10.627 14:07:08 -- accel/accel.sh@20 -- # read -r var val 00:06:10.627 14:07:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:10.627 14:07:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:10.627 14:07:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.627 14:07:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.627 14:07:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.627 14:07:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.627 14:07:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.627 14:07:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.627 14:07:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.627 14:07:08 -- accel/accel.sh@42 -- # jq -r . 00:06:10.627 [2024-11-19 14:07:08.881384] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
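The interleaved IFS=:, read -r var val, and case "$var" in lines that dominate this section are the trace of accel.sh walking accel_perf's key/value output, and its own expected-settings list, one field at a time. The verbatim parser is not in the log; a loop of the same shape, where the variable names are placeholders:

# Assumed shape of the parser behind the trace lines (illustrative only):
while IFS=: read -r var val; do
    case "$var" in
        *"Workload Type"*) opc=${val//[[:space:]]/} ;;
        *"Module"*)        module=${val//[[:space:]]/} ;;
    esac
done < <(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y)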
00:06:10.627 [2024-11-19 14:07:08.881942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59012 ] 00:06:10.627 [2024-11-19 14:07:09.029022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.627 [2024-11-19 14:07:09.168845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.885 14:07:09 -- accel/accel.sh@21 -- # val= 00:06:10.885 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.885 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:10.885 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:10.885 14:07:09 -- accel/accel.sh@21 -- # val= 00:06:10.885 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.885 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:10.885 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:10.885 14:07:09 -- accel/accel.sh@21 -- # val=0x1 00:06:10.885 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.885 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:10.885 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:10.885 14:07:09 -- accel/accel.sh@21 -- # val= 00:06:10.885 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.885 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:10.885 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:10.886 14:07:09 -- accel/accel.sh@21 -- # val= 00:06:10.886 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:10.886 14:07:09 -- accel/accel.sh@21 -- # val=compare 00:06:10.886 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.886 14:07:09 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:10.886 14:07:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:10.886 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:10.886 14:07:09 -- accel/accel.sh@21 -- # val= 00:06:10.886 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:10.886 14:07:09 -- accel/accel.sh@21 -- # val=software 00:06:10.886 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.886 14:07:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:10.886 14:07:09 -- accel/accel.sh@21 -- # val=32 00:06:10.886 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:10.886 14:07:09 -- accel/accel.sh@21 -- # val=32 00:06:10.886 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:10.886 14:07:09 -- accel/accel.sh@21 -- # val=1 00:06:10.886 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:10.886 14:07:09 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:10.886 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:10.886 14:07:09 -- accel/accel.sh@21 -- # val=Yes 00:06:10.886 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:10.886 14:07:09 -- accel/accel.sh@21 -- # val= 00:06:10.886 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:10.886 14:07:09 -- accel/accel.sh@21 -- # val= 00:06:10.886 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:10.886 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:12.260 14:07:10 -- accel/accel.sh@21 -- # val= 00:06:12.260 14:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.260 14:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:12.260 14:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:12.260 14:07:10 -- accel/accel.sh@21 -- # val= 00:06:12.260 14:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.260 14:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:12.260 14:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:12.260 14:07:10 -- accel/accel.sh@21 -- # val= 00:06:12.260 14:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.260 14:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:12.260 14:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:12.260 14:07:10 -- accel/accel.sh@21 -- # val= 00:06:12.260 14:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.260 14:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:12.260 14:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:12.260 14:07:10 -- accel/accel.sh@21 -- # val= 00:06:12.260 14:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.260 14:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:12.260 14:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:12.260 14:07:10 -- accel/accel.sh@21 -- # val= 00:06:12.260 14:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.260 14:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:12.260 14:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:12.260 ************************************ 00:06:12.260 END TEST accel_compare 00:06:12.260 ************************************ 00:06:12.260 14:07:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:12.260 14:07:10 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:12.260 14:07:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.260 00:06:12.260 real 0m3.806s 00:06:12.260 user 0m3.381s 00:06:12.260 sys 0m0.220s 00:06:12.260 14:07:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.260 14:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:12.260 14:07:10 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:12.260 14:07:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:12.260 14:07:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.260 14:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:12.260 ************************************ 00:06:12.260 START TEST accel_xor 00:06:12.260 ************************************ 00:06:12.260 14:07:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:12.260 14:07:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.260 14:07:10 -- accel/accel.sh@17 -- # local accel_module 00:06:12.260 
14:07:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:12.260 14:07:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:12.260 14:07:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.260 14:07:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.260 14:07:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.260 14:07:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.260 14:07:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.260 14:07:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.260 14:07:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.260 14:07:10 -- accel/accel.sh@42 -- # jq -r . 00:06:12.260 [2024-11-19 14:07:10.812982] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.260 [2024-11-19 14:07:10.813083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59054 ] 00:06:12.518 [2024-11-19 14:07:10.960704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.777 [2024-11-19 14:07:11.098311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.151 14:07:12 -- accel/accel.sh@18 -- # out=' 00:06:14.151 SPDK Configuration: 00:06:14.151 Core mask: 0x1 00:06:14.151 00:06:14.151 Accel Perf Configuration: 00:06:14.151 Workload Type: xor 00:06:14.151 Source buffers: 2 00:06:14.151 Transfer size: 4096 bytes 00:06:14.151 Vector count 1 00:06:14.151 Module: software 00:06:14.151 Queue depth: 32 00:06:14.151 Allocate depth: 32 00:06:14.151 # threads/core: 1 00:06:14.151 Run time: 1 seconds 00:06:14.151 Verify: Yes 00:06:14.151 00:06:14.151 Running for 1 seconds... 00:06:14.151 00:06:14.151 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:14.151 ------------------------------------------------------------------------------------ 00:06:14.151 0,0 447360/s 1747 MiB/s 0 0 00:06:14.151 ==================================================================================== 00:06:14.151 Total 447360/s 1747 MiB/s 0 0' 00:06:14.151 14:07:12 -- accel/accel.sh@20 -- # IFS=: 00:06:14.151 14:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:14.151 14:07:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:14.151 14:07:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:14.151 14:07:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.151 14:07:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.151 14:07:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.151 14:07:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.151 14:07:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.151 14:07:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.151 14:07:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.151 14:07:12 -- accel/accel.sh@42 -- # jq -r . 00:06:14.409 [2024-11-19 14:07:12.714003] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
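The xor run above uses the default two source buffers; the follow-up case passes -x 3, which is why its configuration reports Source buffers: 3. What the -y verify pass asserts is that the destination buffer equals the byte-wise XOR of all sources. In miniature, for a single byte and three hypothetical source values:

# One-byte illustration of what the xor workload's verify step checks:
a=$(( 0xA5 )); b=$(( 0x3C )); c=$(( 0x0F ))
printf 'a ^ b ^ c = 0x%02X\n' $(( a ^ b ^ c ))   # prints 0x96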
00:06:14.409 [2024-11-19 14:07:12.714212] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59077 ] 00:06:14.409 [2024-11-19 14:07:12.860498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.667 [2024-11-19 14:07:12.998643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.667 14:07:13 -- accel/accel.sh@21 -- # val= 00:06:14.667 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.667 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:14.667 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:14.667 14:07:13 -- accel/accel.sh@21 -- # val= 00:06:14.667 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.667 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:14.667 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:14.667 14:07:13 -- accel/accel.sh@21 -- # val=0x1 00:06:14.667 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.667 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:14.667 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:14.667 14:07:13 -- accel/accel.sh@21 -- # val= 00:06:14.667 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.667 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:14.667 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:14.667 14:07:13 -- accel/accel.sh@21 -- # val= 00:06:14.667 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:14.668 14:07:13 -- accel/accel.sh@21 -- # val=xor 00:06:14.668 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.668 14:07:13 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:14.668 14:07:13 -- accel/accel.sh@21 -- # val=2 00:06:14.668 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:14.668 14:07:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:14.668 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:14.668 14:07:13 -- accel/accel.sh@21 -- # val= 00:06:14.668 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:14.668 14:07:13 -- accel/accel.sh@21 -- # val=software 00:06:14.668 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.668 14:07:13 -- accel/accel.sh@23 -- # accel_module=software 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:14.668 14:07:13 -- accel/accel.sh@21 -- # val=32 00:06:14.668 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:14.668 14:07:13 -- accel/accel.sh@21 -- # val=32 00:06:14.668 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:14.668 14:07:13 -- accel/accel.sh@21 -- # val=1 00:06:14.668 14:07:13 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:14.668 14:07:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:14.668 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:14.668 14:07:13 -- accel/accel.sh@21 -- # val=Yes 00:06:14.668 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:14.668 14:07:13 -- accel/accel.sh@21 -- # val= 00:06:14.668 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:14.668 14:07:13 -- accel/accel.sh@21 -- # val= 00:06:14.668 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:14.668 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:16.059 14:07:14 -- accel/accel.sh@21 -- # val= 00:06:16.059 14:07:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.059 14:07:14 -- accel/accel.sh@20 -- # IFS=: 00:06:16.059 14:07:14 -- accel/accel.sh@20 -- # read -r var val 00:06:16.059 14:07:14 -- accel/accel.sh@21 -- # val= 00:06:16.059 14:07:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.059 14:07:14 -- accel/accel.sh@20 -- # IFS=: 00:06:16.059 14:07:14 -- accel/accel.sh@20 -- # read -r var val 00:06:16.059 14:07:14 -- accel/accel.sh@21 -- # val= 00:06:16.059 14:07:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.059 14:07:14 -- accel/accel.sh@20 -- # IFS=: 00:06:16.059 14:07:14 -- accel/accel.sh@20 -- # read -r var val 00:06:16.059 14:07:14 -- accel/accel.sh@21 -- # val= 00:06:16.059 14:07:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.059 14:07:14 -- accel/accel.sh@20 -- # IFS=: 00:06:16.059 14:07:14 -- accel/accel.sh@20 -- # read -r var val 00:06:16.059 14:07:14 -- accel/accel.sh@21 -- # val= 00:06:16.059 14:07:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.059 14:07:14 -- accel/accel.sh@20 -- # IFS=: 00:06:16.059 14:07:14 -- accel/accel.sh@20 -- # read -r var val 00:06:16.059 14:07:14 -- accel/accel.sh@21 -- # val= 00:06:16.059 14:07:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.059 14:07:14 -- accel/accel.sh@20 -- # IFS=: 00:06:16.059 14:07:14 -- accel/accel.sh@20 -- # read -r var val 00:06:16.059 14:07:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:16.059 14:07:14 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:16.059 14:07:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.059 00:06:16.059 real 0m3.797s 00:06:16.059 user 0m3.363s 00:06:16.059 sys 0m0.230s 00:06:16.059 14:07:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.059 ************************************ 00:06:16.059 END TEST accel_xor 00:06:16.059 ************************************ 00:06:16.059 14:07:14 -- common/autotest_common.sh@10 -- # set +x 00:06:16.059 14:07:14 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:16.059 14:07:14 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:16.059 14:07:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.059 14:07:14 -- common/autotest_common.sh@10 -- # set +x 00:06:16.059 ************************************ 00:06:16.059 START TEST accel_xor 00:06:16.059 ************************************ 00:06:16.059 
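
The xor run above was driven as /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y, with the JSON accel config fed on /dev/fd/62 by build_accel_config. A minimal sketch for repeating it outside the harness, assuming the software module is the default so -c can be dropped; -x sets the source-buffer count used by the test that follows:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y        # 2 source buffers (default)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3   # 3 source buffers
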
14:07:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:16.059 14:07:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.059 14:07:14 -- accel/accel.sh@17 -- # local accel_module 00:06:16.059 14:07:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:16.317 14:07:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:16.317 14:07:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.317 14:07:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.317 14:07:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.317 14:07:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.317 14:07:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.317 14:07:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.317 14:07:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.317 14:07:14 -- accel/accel.sh@42 -- # jq -r . 00:06:16.317 [2024-11-19 14:07:14.648988] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.317 [2024-11-19 14:07:14.649066] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59118 ] 00:06:16.317 [2024-11-19 14:07:14.789641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.575 [2024-11-19 14:07:14.930668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.473 14:07:16 -- accel/accel.sh@18 -- # out=' 00:06:18.473 SPDK Configuration: 00:06:18.473 Core mask: 0x1 00:06:18.473 00:06:18.473 Accel Perf Configuration: 00:06:18.473 Workload Type: xor 00:06:18.473 Source buffers: 3 00:06:18.473 Transfer size: 4096 bytes 00:06:18.473 Vector count 1 00:06:18.473 Module: software 00:06:18.473 Queue depth: 32 00:06:18.473 Allocate depth: 32 00:06:18.473 # threads/core: 1 00:06:18.473 Run time: 1 seconds 00:06:18.473 Verify: Yes 00:06:18.473 00:06:18.473 Running for 1 seconds... 00:06:18.473 00:06:18.473 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:18.473 ------------------------------------------------------------------------------------ 00:06:18.473 0,0 424928/s 1659 MiB/s 0 0 00:06:18.473 ==================================================================================== 00:06:18.473 Total 424928/s 1659 MiB/s 0 0' 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:18.473 14:07:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:18.473 14:07:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:18.473 14:07:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.473 14:07:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.473 14:07:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.473 14:07:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.473 14:07:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.473 14:07:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.473 14:07:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.473 14:07:16 -- accel/accel.sh@42 -- # jq -r . 00:06:18.473 [2024-11-19 14:07:16.555417] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
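
A third source buffer means an extra xor pass over the data for the software module, and the two result tables above show the cost:

  echo $(( (447360 - 424928) * 100 / 447360 ))   # -> 5, i.e. roughly a 5% drop versus the two-buffer run
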
00:06:18.473 [2024-11-19 14:07:16.555628] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59144 ] 00:06:18.473 [2024-11-19 14:07:16.702124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.473 [2024-11-19 14:07:16.840782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.473 14:07:16 -- accel/accel.sh@21 -- # val= 00:06:18.473 14:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:18.473 14:07:16 -- accel/accel.sh@21 -- # val= 00:06:18.473 14:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:18.473 14:07:16 -- accel/accel.sh@21 -- # val=0x1 00:06:18.473 14:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:18.473 14:07:16 -- accel/accel.sh@21 -- # val= 00:06:18.473 14:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:18.473 14:07:16 -- accel/accel.sh@21 -- # val= 00:06:18.473 14:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:18.473 14:07:16 -- accel/accel.sh@21 -- # val=xor 00:06:18.473 14:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.473 14:07:16 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:18.473 14:07:16 -- accel/accel.sh@21 -- # val=3 00:06:18.473 14:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:18.473 14:07:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.473 14:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:18.473 14:07:16 -- accel/accel.sh@21 -- # val= 00:06:18.473 14:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:18.473 14:07:16 -- accel/accel.sh@21 -- # val=software 00:06:18.473 14:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.473 14:07:16 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:18.473 14:07:16 -- accel/accel.sh@21 -- # val=32 00:06:18.473 14:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:18.473 14:07:16 -- accel/accel.sh@21 -- # val=32 00:06:18.473 14:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:18.473 14:07:16 -- accel/accel.sh@21 -- # val=1 00:06:18.473 14:07:16 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:18.473 14:07:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.473 14:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:18.473 14:07:16 -- accel/accel.sh@21 -- # val=Yes 00:06:18.473 14:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:18.473 14:07:16 -- accel/accel.sh@21 -- # val= 00:06:18.473 14:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:18.473 14:07:16 -- accel/accel.sh@21 -- # val= 00:06:18.473 14:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:18.473 14:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:20.371 14:07:18 -- accel/accel.sh@21 -- # val= 00:06:20.371 14:07:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.371 14:07:18 -- accel/accel.sh@20 -- # IFS=: 00:06:20.371 14:07:18 -- accel/accel.sh@20 -- # read -r var val 00:06:20.371 14:07:18 -- accel/accel.sh@21 -- # val= 00:06:20.371 14:07:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.371 14:07:18 -- accel/accel.sh@20 -- # IFS=: 00:06:20.371 14:07:18 -- accel/accel.sh@20 -- # read -r var val 00:06:20.371 14:07:18 -- accel/accel.sh@21 -- # val= 00:06:20.371 14:07:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.371 14:07:18 -- accel/accel.sh@20 -- # IFS=: 00:06:20.371 14:07:18 -- accel/accel.sh@20 -- # read -r var val 00:06:20.371 14:07:18 -- accel/accel.sh@21 -- # val= 00:06:20.371 14:07:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.371 14:07:18 -- accel/accel.sh@20 -- # IFS=: 00:06:20.371 14:07:18 -- accel/accel.sh@20 -- # read -r var val 00:06:20.371 14:07:18 -- accel/accel.sh@21 -- # val= 00:06:20.371 14:07:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.371 14:07:18 -- accel/accel.sh@20 -- # IFS=: 00:06:20.371 14:07:18 -- accel/accel.sh@20 -- # read -r var val 00:06:20.371 14:07:18 -- accel/accel.sh@21 -- # val= 00:06:20.371 14:07:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.371 14:07:18 -- accel/accel.sh@20 -- # IFS=: 00:06:20.371 14:07:18 -- accel/accel.sh@20 -- # read -r var val 00:06:20.371 14:07:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:20.371 ************************************ 00:06:20.371 END TEST accel_xor 00:06:20.371 ************************************ 00:06:20.371 14:07:18 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:20.371 14:07:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.371 00:06:20.371 real 0m3.803s 00:06:20.371 user 0m3.380s 00:06:20.371 sys 0m0.220s 00:06:20.371 14:07:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.371 14:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:20.371 14:07:18 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:20.371 14:07:18 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:20.371 14:07:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.371 14:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:20.371 ************************************ 00:06:20.371 START TEST accel_dif_verify 00:06:20.371 ************************************ 
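
This test moves from raw data ops to protection information: dif_verify checks the 8 bytes of metadata (the T10-style guard/application/reference tags) attached to each 512-byte block, so per the configuration below every 4096-byte transfer covers eight protected blocks:

  echo $(( 4096 / 512 ))   # -> 8 blocks verified per 4096-byte transfer

Verify is No for the dif_* runs because -y is not passed: the xor runs asked accel_perf to check its own output, while here the integrity check is the workload itself.
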
00:06:20.371 14:07:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:20.371 14:07:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.371 14:07:18 -- accel/accel.sh@17 -- # local accel_module 00:06:20.371 14:07:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:20.371 14:07:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:20.371 14:07:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.371 14:07:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.371 14:07:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.371 14:07:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.371 14:07:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.371 14:07:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.371 14:07:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.371 14:07:18 -- accel/accel.sh@42 -- # jq -r . 00:06:20.371 [2024-11-19 14:07:18.496495] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:20.371 [2024-11-19 14:07:18.496598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59185 ] 00:06:20.371 [2024-11-19 14:07:18.644899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.371 [2024-11-19 14:07:18.782219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.271 14:07:20 -- accel/accel.sh@18 -- # out=' 00:06:22.271 SPDK Configuration: 00:06:22.271 Core mask: 0x1 00:06:22.271 00:06:22.271 Accel Perf Configuration: 00:06:22.271 Workload Type: dif_verify 00:06:22.271 Vector size: 4096 bytes 00:06:22.271 Transfer size: 4096 bytes 00:06:22.271 Block size: 512 bytes 00:06:22.271 Metadata size: 8 bytes 00:06:22.271 Vector count 1 00:06:22.271 Module: software 00:06:22.271 Queue depth: 32 00:06:22.271 Allocate depth: 32 00:06:22.271 # threads/core: 1 00:06:22.271 Run time: 1 seconds 00:06:22.271 Verify: No 00:06:22.271 00:06:22.271 Running for 1 seconds... 00:06:22.271 00:06:22.271 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:22.271 ------------------------------------------------------------------------------------ 00:06:22.271 0,0 128000/s 507 MiB/s 0 0 00:06:22.271 ==================================================================================== 00:06:22.271 Total 128000/s 500 MiB/s 0 0' 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.271 14:07:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:22.271 14:07:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:22.271 14:07:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.271 14:07:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.271 14:07:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.271 14:07:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.271 14:07:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.271 14:07:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.271 14:07:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.271 14:07:20 -- accel/accel.sh@42 -- # jq -r . 00:06:22.271 [2024-11-19 14:07:20.395081] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:22.271 [2024-11-19 14:07:20.395308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59211 ] 00:06:22.271 [2024-11-19 14:07:20.541430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.271 [2024-11-19 14:07:20.678848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.271 14:07:20 -- accel/accel.sh@21 -- # val= 00:06:22.271 14:07:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.271 14:07:20 -- accel/accel.sh@21 -- # val= 00:06:22.271 14:07:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.271 14:07:20 -- accel/accel.sh@21 -- # val=0x1 00:06:22.271 14:07:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.271 14:07:20 -- accel/accel.sh@21 -- # val= 00:06:22.271 14:07:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.271 14:07:20 -- accel/accel.sh@21 -- # val= 00:06:22.271 14:07:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.271 14:07:20 -- accel/accel.sh@21 -- # val=dif_verify 00:06:22.271 14:07:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.271 14:07:20 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.271 14:07:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.271 14:07:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.271 14:07:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.271 14:07:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.271 14:07:20 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:22.271 14:07:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.271 14:07:20 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:22.271 14:07:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.271 14:07:20 -- accel/accel.sh@21 -- # val= 00:06:22.271 14:07:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.271 14:07:20 -- accel/accel.sh@21 -- # val=software 00:06:22.271 14:07:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.271 14:07:20 -- accel/accel.sh@23 -- # accel_module=software 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.271 14:07:20 -- accel/accel.sh@21 
-- # val=32 00:06:22.271 14:07:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.271 14:07:20 -- accel/accel.sh@21 -- # val=32 00:06:22.271 14:07:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.271 14:07:20 -- accel/accel.sh@21 -- # val=1 00:06:22.271 14:07:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.271 14:07:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:22.271 14:07:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.271 14:07:20 -- accel/accel.sh@21 -- # val=No 00:06:22.271 14:07:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.271 14:07:20 -- accel/accel.sh@21 -- # val= 00:06:22.271 14:07:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.271 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.271 14:07:20 -- accel/accel.sh@21 -- # val= 00:06:22.272 14:07:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.272 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.272 14:07:20 -- accel/accel.sh@20 -- # read -r var val 00:06:24.173 14:07:22 -- accel/accel.sh@21 -- # val= 00:06:24.173 14:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.173 14:07:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.173 14:07:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.173 14:07:22 -- accel/accel.sh@21 -- # val= 00:06:24.173 14:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.173 14:07:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.173 14:07:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.173 14:07:22 -- accel/accel.sh@21 -- # val= 00:06:24.173 14:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.173 14:07:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.173 14:07:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.173 14:07:22 -- accel/accel.sh@21 -- # val= 00:06:24.173 14:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.173 14:07:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.173 14:07:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.173 14:07:22 -- accel/accel.sh@21 -- # val= 00:06:24.173 14:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.173 14:07:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.173 14:07:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.173 14:07:22 -- accel/accel.sh@21 -- # val= 00:06:24.173 14:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.173 14:07:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.173 14:07:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.173 14:07:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:24.173 14:07:22 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:24.173 14:07:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.173 00:06:24.173 real 0m3.792s 00:06:24.173 user 0m3.374s 00:06:24.173 sys 0m0.217s 00:06:24.173 14:07:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.173 14:07:22 -- common/autotest_common.sh@10 -- # set +x 00:06:24.173 ************************************ 00:06:24.173 END TEST 
accel_dif_verify 00:06:24.173 ************************************ 00:06:24.173 14:07:22 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:24.173 14:07:22 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:24.173 14:07:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.173 14:07:22 -- common/autotest_common.sh@10 -- # set +x 00:06:24.173 ************************************ 00:06:24.173 START TEST accel_dif_generate 00:06:24.173 ************************************ 00:06:24.173 14:07:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:24.173 14:07:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.173 14:07:22 -- accel/accel.sh@17 -- # local accel_module 00:06:24.173 14:07:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:24.173 14:07:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:24.173 14:07:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.173 14:07:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.173 14:07:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.173 14:07:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.173 14:07:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.173 14:07:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.173 14:07:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.173 14:07:22 -- accel/accel.sh@42 -- # jq -r . 00:06:24.173 [2024-11-19 14:07:22.322572] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:24.173 [2024-11-19 14:07:22.322796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59246 ] 00:06:24.173 [2024-11-19 14:07:22.469592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.173 [2024-11-19 14:07:22.606543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.123 14:07:24 -- accel/accel.sh@18 -- # out=' 00:06:26.123 SPDK Configuration: 00:06:26.123 Core mask: 0x1 00:06:26.123 00:06:26.123 Accel Perf Configuration: 00:06:26.123 Workload Type: dif_generate 00:06:26.123 Vector size: 4096 bytes 00:06:26.123 Transfer size: 4096 bytes 00:06:26.123 Block size: 512 bytes 00:06:26.123 Metadata size: 8 bytes 00:06:26.123 Vector count 1 00:06:26.123 Module: software 00:06:26.123 Queue depth: 32 00:06:26.123 Allocate depth: 32 00:06:26.123 # threads/core: 1 00:06:26.123 Run time: 1 seconds 00:06:26.123 Verify: No 00:06:26.123 00:06:26.123 Running for 1 seconds... 
00:06:26.123 00:06:26.123 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:26.123 ------------------------------------------------------------------------------------ 00:06:26.123 0,0 155264/s 615 MiB/s 0 0 00:06:26.123 ==================================================================================== 00:06:26.123 Total 155264/s 606 MiB/s 0 0' 00:06:26.123 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.123 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:26.123 14:07:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:26.123 14:07:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:26.123 14:07:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.123 14:07:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.123 14:07:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.123 14:07:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.123 14:07:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.123 14:07:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.123 14:07:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.123 14:07:24 -- accel/accel.sh@42 -- # jq -r . 00:06:26.123 [2024-11-19 14:07:24.224908] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.123 [2024-11-19 14:07:24.225012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59267 ] 00:06:26.123 [2024-11-19 14:07:24.372308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.123 [2024-11-19 14:07:24.510595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.123 14:07:24 -- accel/accel.sh@21 -- # val= 00:06:26.123 14:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.123 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.123 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:26.123 14:07:24 -- accel/accel.sh@21 -- # val= 00:06:26.123 14:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.123 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.123 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:26.123 14:07:24 -- accel/accel.sh@21 -- # val=0x1 00:06:26.123 14:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.123 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.123 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:26.123 14:07:24 -- accel/accel.sh@21 -- # val= 00:06:26.124 14:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:26.124 14:07:24 -- accel/accel.sh@21 -- # val= 00:06:26.124 14:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:26.124 14:07:24 -- accel/accel.sh@21 -- # val=dif_generate 00:06:26.124 14:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.124 14:07:24 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:26.124 14:07:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.124 14:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # read -r var val 
00:06:26.124 14:07:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.124 14:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:26.124 14:07:24 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:26.124 14:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:26.124 14:07:24 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:26.124 14:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:26.124 14:07:24 -- accel/accel.sh@21 -- # val= 00:06:26.124 14:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:26.124 14:07:24 -- accel/accel.sh@21 -- # val=software 00:06:26.124 14:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.124 14:07:24 -- accel/accel.sh@23 -- # accel_module=software 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:26.124 14:07:24 -- accel/accel.sh@21 -- # val=32 00:06:26.124 14:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:26.124 14:07:24 -- accel/accel.sh@21 -- # val=32 00:06:26.124 14:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:26.124 14:07:24 -- accel/accel.sh@21 -- # val=1 00:06:26.124 14:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:26.124 14:07:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:26.124 14:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:26.124 14:07:24 -- accel/accel.sh@21 -- # val=No 00:06:26.124 14:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:26.124 14:07:24 -- accel/accel.sh@21 -- # val= 00:06:26.124 14:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:26.124 14:07:24 -- accel/accel.sh@21 -- # val= 00:06:26.124 14:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:26.124 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:28.027 14:07:26 -- accel/accel.sh@21 -- # val= 00:06:28.027 14:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.027 14:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:28.027 14:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:28.027 14:07:26 -- accel/accel.sh@21 -- # val= 00:06:28.027 14:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.027 14:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:28.027 14:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:28.027 14:07:26 -- accel/accel.sh@21 -- # val= 00:06:28.027 14:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.027 14:07:26 -- 
accel/accel.sh@20 -- # IFS=: 00:06:28.027 14:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:28.027 14:07:26 -- accel/accel.sh@21 -- # val= 00:06:28.027 14:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.027 14:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:28.027 14:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:28.027 14:07:26 -- accel/accel.sh@21 -- # val= 00:06:28.027 14:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.027 14:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:28.027 14:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:28.027 14:07:26 -- accel/accel.sh@21 -- # val= 00:06:28.027 14:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.027 14:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:28.027 14:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:28.027 14:07:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:28.027 14:07:26 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:28.027 14:07:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.027 00:06:28.027 real 0m3.805s 00:06:28.027 user 0m3.380s 00:06:28.027 sys 0m0.223s 00:06:28.027 14:07:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.027 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:06:28.027 ************************************ 00:06:28.027 END TEST accel_dif_generate 00:06:28.027 ************************************ 00:06:28.027 14:07:26 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:28.027 14:07:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:28.027 14:07:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.027 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:06:28.027 ************************************ 00:06:28.027 START TEST accel_dif_generate_copy 00:06:28.027 ************************************ 00:06:28.027 14:07:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:28.027 14:07:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.027 14:07:26 -- accel/accel.sh@17 -- # local accel_module 00:06:28.027 14:07:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:28.027 14:07:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.027 14:07:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.027 14:07:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:28.027 14:07:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.027 14:07:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.027 14:07:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.027 14:07:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.027 14:07:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.027 14:07:26 -- accel/accel.sh@42 -- # jq -r . 00:06:28.027 [2024-11-19 14:07:26.169346] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
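
As the names suggest, dif_generate (just completed) computes and inserts those protection fields in place, while dif_generate_copy, starting here, generates them while also copying the data to a separate destination buffer, so the same 4096-byte transfer now includes a data move. Only the workload name changes in the traced command:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy   # as traced; -c expects the harness-fed JSON config
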
00:06:28.027 [2024-11-19 14:07:26.169448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59308 ] 00:06:28.027 [2024-11-19 14:07:26.317219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.027 [2024-11-19 14:07:26.460022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.927 14:07:28 -- accel/accel.sh@18 -- # out=' 00:06:29.927 SPDK Configuration: 00:06:29.927 Core mask: 0x1 00:06:29.927 00:06:29.927 Accel Perf Configuration: 00:06:29.927 Workload Type: dif_generate_copy 00:06:29.927 Vector size: 4096 bytes 00:06:29.927 Transfer size: 4096 bytes 00:06:29.927 Vector count 1 00:06:29.927 Module: software 00:06:29.927 Queue depth: 32 00:06:29.927 Allocate depth: 32 00:06:29.927 # threads/core: 1 00:06:29.927 Run time: 1 seconds 00:06:29.927 Verify: No 00:06:29.927 00:06:29.928 Running for 1 seconds... 00:06:29.928 00:06:29.928 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.928 ------------------------------------------------------------------------------------ 00:06:29.928 0,0 118688/s 470 MiB/s 0 0 00:06:29.928 ==================================================================================== 00:06:29.928 Total 118688/s 463 MiB/s 0 0' 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.928 14:07:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:29.928 14:07:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:29.928 14:07:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.928 14:07:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.928 14:07:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.928 14:07:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.928 14:07:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.928 14:07:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.928 14:07:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.928 14:07:28 -- accel/accel.sh@42 -- # jq -r . 00:06:29.928 [2024-11-19 14:07:28.080441] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
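
Every configuration block in these runs reports Queue depth: 32, Allocate depth: 32 and one thread on core 0: accel_perf keeps 32 operations in flight and pre-allocates a matching pool of task structures. A sketch for probing a deeper queue, assuming -q is accel_perf's queue-depth flag (worth confirming against its usage output):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy -q 64
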
00:06:29.928 [2024-11-19 14:07:28.080646] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59334 ] 00:06:29.928 [2024-11-19 14:07:28.229614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.928 [2024-11-19 14:07:28.368041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.928 14:07:28 -- accel/accel.sh@21 -- # val= 00:06:29.928 14:07:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.928 14:07:28 -- accel/accel.sh@21 -- # val= 00:06:29.928 14:07:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.928 14:07:28 -- accel/accel.sh@21 -- # val=0x1 00:06:29.928 14:07:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.928 14:07:28 -- accel/accel.sh@21 -- # val= 00:06:29.928 14:07:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.928 14:07:28 -- accel/accel.sh@21 -- # val= 00:06:29.928 14:07:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.928 14:07:28 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:29.928 14:07:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.928 14:07:28 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.928 14:07:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.928 14:07:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.928 14:07:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.928 14:07:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.928 14:07:28 -- accel/accel.sh@21 -- # val= 00:06:29.928 14:07:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.928 14:07:28 -- accel/accel.sh@21 -- # val=software 00:06:29.928 14:07:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.928 14:07:28 -- accel/accel.sh@23 -- # accel_module=software 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.928 14:07:28 -- accel/accel.sh@21 -- # val=32 00:06:29.928 14:07:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.928 14:07:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.928 14:07:28 -- accel/accel.sh@21 -- # val=32 00:06:30.187 14:07:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.187 14:07:28 -- accel/accel.sh@20 -- # IFS=: 00:06:30.187 14:07:28 -- accel/accel.sh@20 -- # read -r var val 00:06:30.187 14:07:28 -- accel/accel.sh@21 
-- # val=1 00:06:30.187 14:07:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.187 14:07:28 -- accel/accel.sh@20 -- # IFS=: 00:06:30.187 14:07:28 -- accel/accel.sh@20 -- # read -r var val 00:06:30.187 14:07:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:30.187 14:07:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.187 14:07:28 -- accel/accel.sh@20 -- # IFS=: 00:06:30.187 14:07:28 -- accel/accel.sh@20 -- # read -r var val 00:06:30.187 14:07:28 -- accel/accel.sh@21 -- # val=No 00:06:30.187 14:07:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.187 14:07:28 -- accel/accel.sh@20 -- # IFS=: 00:06:30.187 14:07:28 -- accel/accel.sh@20 -- # read -r var val 00:06:30.187 14:07:28 -- accel/accel.sh@21 -- # val= 00:06:30.187 14:07:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.187 14:07:28 -- accel/accel.sh@20 -- # IFS=: 00:06:30.187 14:07:28 -- accel/accel.sh@20 -- # read -r var val 00:06:30.187 14:07:28 -- accel/accel.sh@21 -- # val= 00:06:30.187 14:07:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.187 14:07:28 -- accel/accel.sh@20 -- # IFS=: 00:06:30.187 14:07:28 -- accel/accel.sh@20 -- # read -r var val 00:06:31.567 14:07:29 -- accel/accel.sh@21 -- # val= 00:06:31.567 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.567 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:31.567 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:31.567 14:07:29 -- accel/accel.sh@21 -- # val= 00:06:31.567 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.567 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:31.567 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:31.567 14:07:29 -- accel/accel.sh@21 -- # val= 00:06:31.567 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.567 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:31.567 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:31.567 14:07:29 -- accel/accel.sh@21 -- # val= 00:06:31.567 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.567 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:31.567 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:31.567 14:07:29 -- accel/accel.sh@21 -- # val= 00:06:31.567 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.567 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:31.567 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:31.567 14:07:29 -- accel/accel.sh@21 -- # val= 00:06:31.567 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.567 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:31.567 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:31.567 14:07:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:31.567 14:07:29 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:31.567 14:07:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.567 00:06:31.567 real 0m3.810s 00:06:31.567 user 0m3.370s 00:06:31.567 sys 0m0.236s 00:06:31.567 14:07:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.567 ************************************ 00:06:31.567 END TEST accel_dif_generate_copy 00:06:31.567 ************************************ 00:06:31.567 14:07:29 -- common/autotest_common.sh@10 -- # set +x 00:06:31.567 14:07:29 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:31.567 14:07:29 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:31.567 14:07:29 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:31.567 14:07:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.567 14:07:29 -- 
common/autotest_common.sh@10 -- # set +x 00:06:31.567 ************************************ 00:06:31.567 START TEST accel_comp 00:06:31.567 ************************************ 00:06:31.567 14:07:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:31.567 14:07:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.567 14:07:29 -- accel/accel.sh@17 -- # local accel_module 00:06:31.567 14:07:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:31.567 14:07:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:31.567 14:07:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.567 14:07:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.567 14:07:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.567 14:07:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.567 14:07:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.567 14:07:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.567 14:07:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.567 14:07:29 -- accel/accel.sh@42 -- # jq -r . 00:06:31.567 [2024-11-19 14:07:30.016511] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:31.567 [2024-11-19 14:07:30.016615] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59375 ] 00:06:31.826 [2024-11-19 14:07:30.167002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.826 [2024-11-19 14:07:30.338576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.728 14:07:32 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:33.728 00:06:33.728 SPDK Configuration: 00:06:33.728 Core mask: 0x1 00:06:33.728 00:06:33.728 Accel Perf Configuration: 00:06:33.728 Workload Type: compress 00:06:33.728 Transfer size: 4096 bytes 00:06:33.728 Vector count 1 00:06:33.728 Module: software 00:06:33.728 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:33.728 Queue depth: 32 00:06:33.728 Allocate depth: 32 00:06:33.728 # threads/core: 1 00:06:33.728 Run time: 1 seconds 00:06:33.728 Verify: No 00:06:33.728 00:06:33.728 Running for 1 seconds... 
00:06:33.728 00:06:33.728 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.728 ------------------------------------------------------------------------------------ 00:06:33.728 0,0 49024/s 204 MiB/s 0 0 00:06:33.728 ==================================================================================== 00:06:33.728 Total 49024/s 191 MiB/s 0 0' 00:06:33.728 14:07:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:33.728 14:07:32 -- accel/accel.sh@20 -- # IFS=: 00:06:33.728 14:07:32 -- accel/accel.sh@20 -- # read -r var val 00:06:33.728 14:07:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:33.728 14:07:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.728 14:07:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.728 14:07:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.728 14:07:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.728 14:07:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.728 14:07:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.728 14:07:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.728 14:07:32 -- accel/accel.sh@42 -- # jq -r . 00:06:33.728 [2024-11-19 14:07:32.119290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:33.728 [2024-11-19 14:07:32.119399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59401 ] 00:06:33.728 [2024-11-19 14:07:32.267867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.986 [2024-11-19 14:07:32.449797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.245 14:07:32 -- accel/accel.sh@21 -- # val= 00:06:34.245 14:07:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # IFS=: 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # read -r var val 00:06:34.245 14:07:32 -- accel/accel.sh@21 -- # val= 00:06:34.245 14:07:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # IFS=: 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # read -r var val 00:06:34.245 14:07:32 -- accel/accel.sh@21 -- # val= 00:06:34.245 14:07:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # IFS=: 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # read -r var val 00:06:34.245 14:07:32 -- accel/accel.sh@21 -- # val=0x1 00:06:34.245 14:07:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # IFS=: 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # read -r var val 00:06:34.245 14:07:32 -- accel/accel.sh@21 -- # val= 00:06:34.245 14:07:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # IFS=: 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # read -r var val 00:06:34.245 14:07:32 -- accel/accel.sh@21 -- # val= 00:06:34.245 14:07:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # IFS=: 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # read -r var val 00:06:34.245 14:07:32 -- accel/accel.sh@21 -- # val=compress 00:06:34.245 14:07:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.245 14:07:32 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # IFS=: 
00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # read -r var val 00:06:34.245 14:07:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.245 14:07:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # IFS=: 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # read -r var val 00:06:34.245 14:07:32 -- accel/accel.sh@21 -- # val= 00:06:34.245 14:07:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # IFS=: 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # read -r var val 00:06:34.245 14:07:32 -- accel/accel.sh@21 -- # val=software 00:06:34.245 14:07:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.245 14:07:32 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # IFS=: 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # read -r var val 00:06:34.245 14:07:32 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:34.245 14:07:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # IFS=: 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # read -r var val 00:06:34.245 14:07:32 -- accel/accel.sh@21 -- # val=32 00:06:34.245 14:07:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # IFS=: 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # read -r var val 00:06:34.245 14:07:32 -- accel/accel.sh@21 -- # val=32 00:06:34.245 14:07:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # IFS=: 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # read -r var val 00:06:34.245 14:07:32 -- accel/accel.sh@21 -- # val=1 00:06:34.245 14:07:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # IFS=: 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # read -r var val 00:06:34.245 14:07:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.245 14:07:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # IFS=: 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # read -r var val 00:06:34.245 14:07:32 -- accel/accel.sh@21 -- # val=No 00:06:34.245 14:07:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # IFS=: 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # read -r var val 00:06:34.245 14:07:32 -- accel/accel.sh@21 -- # val= 00:06:34.245 14:07:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # IFS=: 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # read -r var val 00:06:34.245 14:07:32 -- accel/accel.sh@21 -- # val= 00:06:34.245 14:07:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # IFS=: 00:06:34.245 14:07:32 -- accel/accel.sh@20 -- # read -r var val 00:06:35.621 14:07:34 -- accel/accel.sh@21 -- # val= 00:06:35.621 14:07:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.621 14:07:34 -- accel/accel.sh@20 -- # IFS=: 00:06:35.621 14:07:34 -- accel/accel.sh@20 -- # read -r var val 00:06:35.621 14:07:34 -- accel/accel.sh@21 -- # val= 00:06:35.621 14:07:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.621 14:07:34 -- accel/accel.sh@20 -- # IFS=: 00:06:35.621 14:07:34 -- accel/accel.sh@20 -- # read -r var val 00:06:35.621 14:07:34 -- accel/accel.sh@21 -- # val= 00:06:35.621 14:07:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.621 14:07:34 -- accel/accel.sh@20 -- # IFS=: 00:06:35.621 14:07:34 -- accel/accel.sh@20 -- # read -r var val 00:06:35.621 14:07:34 -- accel/accel.sh@21 -- # val= 
00:06:35.621 14:07:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.621 14:07:34 -- accel/accel.sh@20 -- # IFS=: 00:06:35.621 14:07:34 -- accel/accel.sh@20 -- # read -r var val 00:06:35.621 14:07:34 -- accel/accel.sh@21 -- # val= 00:06:35.621 14:07:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.621 14:07:34 -- accel/accel.sh@20 -- # IFS=: 00:06:35.621 14:07:34 -- accel/accel.sh@20 -- # read -r var val 00:06:35.621 14:07:34 -- accel/accel.sh@21 -- # val= 00:06:35.621 14:07:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.621 14:07:34 -- accel/accel.sh@20 -- # IFS=: 00:06:35.621 14:07:34 -- accel/accel.sh@20 -- # read -r var val 00:06:35.621 14:07:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.621 ************************************ 00:06:35.621 END TEST accel_comp 00:06:35.621 ************************************ 00:06:35.621 14:07:34 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:35.621 14:07:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.621 00:06:35.621 real 0m4.102s 00:06:35.621 user 0m3.636s 00:06:35.621 sys 0m0.259s 00:06:35.621 14:07:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.621 14:07:34 -- common/autotest_common.sh@10 -- # set +x 00:06:35.621 14:07:34 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:35.621 14:07:34 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:35.621 14:07:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.621 14:07:34 -- common/autotest_common.sh@10 -- # set +x 00:06:35.621 ************************************ 00:06:35.621 START TEST accel_decomp 00:06:35.621 ************************************ 00:06:35.621 14:07:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:35.621 14:07:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.621 14:07:34 -- accel/accel.sh@17 -- # local accel_module 00:06:35.621 14:07:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:35.621 14:07:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:35.621 14:07:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.621 14:07:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.621 14:07:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.621 14:07:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.621 14:07:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.621 14:07:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.621 14:07:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.622 14:07:34 -- accel/accel.sh@42 -- # jq -r . 00:06:35.622 [2024-11-19 14:07:34.165205] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:35.622 [2024-11-19 14:07:34.166325] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59442 ] 00:06:35.880 [2024-11-19 14:07:34.321291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.138 [2024-11-19 14:07:34.472740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.514 14:07:36 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:37.514 00:06:37.514 SPDK Configuration: 00:06:37.514 Core mask: 0x1 00:06:37.514 00:06:37.514 Accel Perf Configuration: 00:06:37.514 Workload Type: decompress 00:06:37.514 Transfer size: 4096 bytes 00:06:37.514 Vector count 1 00:06:37.514 Module: software 00:06:37.514 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:37.514 Queue depth: 32 00:06:37.514 Allocate depth: 32 00:06:37.514 # threads/core: 1 00:06:37.514 Run time: 1 seconds 00:06:37.514 Verify: Yes 00:06:37.514 00:06:37.514 Running for 1 seconds... 00:06:37.514 00:06:37.514 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.514 ------------------------------------------------------------------------------------ 00:06:37.514 0,0 76768/s 141 MiB/s 0 0 00:06:37.514 ==================================================================================== 00:06:37.514 Total 76768/s 299 MiB/s 0 0' 00:06:37.514 14:07:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.514 14:07:36 -- accel/accel.sh@20 -- # read -r var val 00:06:37.514 14:07:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:37.514 14:07:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:37.514 14:07:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.514 14:07:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.514 14:07:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.514 14:07:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.514 14:07:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.514 14:07:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.514 14:07:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.514 14:07:36 -- accel/accel.sh@42 -- # jq -r . 00:06:37.775 [2024-11-19 14:07:36.105701] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
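The binary under test is visible in the xtrace at @12: the harness invokes the accel_perf example directly, feeding its JSON accel config over file descriptor 62 via "-c /dev/fd/62" (here the config is empty, per the "accel_json_cfg=()" lines, so that flag should be safe to drop when reproducing by hand). A standalone equivalent of the run summarized above would look roughly like:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y

matching the summary fields: -t the run time ("Run time: 1 seconds"), -w the workload type, -l the compressed input ("File Name"), and -y verification ("Verify: Yes").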
00:06:37.775 [2024-11-19 14:07:36.105803] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59468 ] 00:06:37.775 [2024-11-19 14:07:36.253260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.038 [2024-11-19 14:07:36.431276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.038 14:07:36 -- accel/accel.sh@21 -- # val= 00:06:38.038 14:07:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.038 14:07:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.038 14:07:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.038 14:07:36 -- accel/accel.sh@21 -- # val= 00:06:38.038 14:07:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.038 14:07:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.038 14:07:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.038 14:07:36 -- accel/accel.sh@21 -- # val= 00:06:38.038 14:07:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.038 14:07:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.038 14:07:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.038 14:07:36 -- accel/accel.sh@21 -- # val=0x1 00:06:38.038 14:07:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.038 14:07:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.038 14:07:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.038 14:07:36 -- accel/accel.sh@21 -- # val= 00:06:38.038 14:07:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.038 14:07:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.296 14:07:36 -- accel/accel.sh@21 -- # val= 00:06:38.296 14:07:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.296 14:07:36 -- accel/accel.sh@21 -- # val=decompress 00:06:38.296 14:07:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.296 14:07:36 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.296 14:07:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:38.296 14:07:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.296 14:07:36 -- accel/accel.sh@21 -- # val= 00:06:38.296 14:07:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.296 14:07:36 -- accel/accel.sh@21 -- # val=software 00:06:38.296 14:07:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.296 14:07:36 -- accel/accel.sh@23 -- # accel_module=software 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.296 14:07:36 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:38.296 14:07:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.296 14:07:36 -- accel/accel.sh@21 -- # val=32 00:06:38.296 14:07:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.296 14:07:36 -- 
accel/accel.sh@21 -- # val=32 00:06:38.296 14:07:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.296 14:07:36 -- accel/accel.sh@21 -- # val=1 00:06:38.296 14:07:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.296 14:07:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:38.296 14:07:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.296 14:07:36 -- accel/accel.sh@21 -- # val=Yes 00:06:38.296 14:07:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.296 14:07:36 -- accel/accel.sh@21 -- # val= 00:06:38.296 14:07:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.296 14:07:36 -- accel/accel.sh@21 -- # val= 00:06:38.296 14:07:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.296 14:07:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.670 14:07:38 -- accel/accel.sh@21 -- # val= 00:06:39.670 14:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.670 14:07:38 -- accel/accel.sh@20 -- # IFS=: 00:06:39.670 14:07:38 -- accel/accel.sh@20 -- # read -r var val 00:06:39.670 14:07:38 -- accel/accel.sh@21 -- # val= 00:06:39.670 14:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.670 14:07:38 -- accel/accel.sh@20 -- # IFS=: 00:06:39.670 14:07:38 -- accel/accel.sh@20 -- # read -r var val 00:06:39.670 14:07:38 -- accel/accel.sh@21 -- # val= 00:06:39.670 14:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.670 14:07:38 -- accel/accel.sh@20 -- # IFS=: 00:06:39.670 14:07:38 -- accel/accel.sh@20 -- # read -r var val 00:06:39.670 14:07:38 -- accel/accel.sh@21 -- # val= 00:06:39.670 14:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.670 14:07:38 -- accel/accel.sh@20 -- # IFS=: 00:06:39.670 14:07:38 -- accel/accel.sh@20 -- # read -r var val 00:06:39.670 14:07:38 -- accel/accel.sh@21 -- # val= 00:06:39.670 14:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.670 14:07:38 -- accel/accel.sh@20 -- # IFS=: 00:06:39.670 14:07:38 -- accel/accel.sh@20 -- # read -r var val 00:06:39.670 14:07:38 -- accel/accel.sh@21 -- # val= 00:06:39.670 14:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.670 14:07:38 -- accel/accel.sh@20 -- # IFS=: 00:06:39.670 14:07:38 -- accel/accel.sh@20 -- # read -r var val 00:06:39.670 ************************************ 00:06:39.670 END TEST accel_decomp 00:06:39.670 ************************************ 00:06:39.670 14:07:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:39.670 14:07:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:39.670 14:07:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.670 00:06:39.670 real 0m4.086s 00:06:39.670 user 0m1.727s 00:06:39.670 sys 0m0.120s 00:06:39.670 14:07:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.670 14:07:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.928 14:07:38 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
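The accel_decmop_full case that starts here reruns the same decompress workload with one extra flag, "-o 0". Judging from the run summaries, -o sets the transfer size: the default runs report "Transfer size: 4096 bytes" while the -o 0 runs report "Transfer size: 111250 bytes", so passing 0 evidently makes accel_perf process the compressed file's full chunk in one transfer instead of slicing it into 4 KiB units. Side by side (paths shortened):

    accel_perf -t 1 -w decompress -l test/accel/bib -y         # 4096-byte transfers
    accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0    # one 111250-byte chunk per transfer

The transfers/s figures below are therefore per-chunk rates, not directly comparable with the 4 KiB runs.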
00:06:39.928 14:07:38 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:39.928 14:07:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.928 14:07:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.928 ************************************ 00:06:39.928 START TEST accel_decmop_full 00:06:39.928 ************************************ 00:06:39.928 14:07:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:39.928 14:07:38 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.928 14:07:38 -- accel/accel.sh@17 -- # local accel_module 00:06:39.928 14:07:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:39.928 14:07:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:39.928 14:07:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.928 14:07:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.928 14:07:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.928 14:07:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.928 14:07:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.928 14:07:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.928 14:07:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.928 14:07:38 -- accel/accel.sh@42 -- # jq -r . 00:06:39.928 [2024-11-19 14:07:38.279971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.928 [2024-11-19 14:07:38.280083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59509 ] 00:06:39.928 [2024-11-19 14:07:38.425683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.186 [2024-11-19 14:07:38.628486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.091 14:07:40 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:42.091 00:06:42.091 SPDK Configuration: 00:06:42.091 Core mask: 0x1 00:06:42.091 00:06:42.091 Accel Perf Configuration: 00:06:42.091 Workload Type: decompress 00:06:42.091 Transfer size: 111250 bytes 00:06:42.091 Vector count 1 00:06:42.091 Module: software 00:06:42.091 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:42.091 Queue depth: 32 00:06:42.091 Allocate depth: 32 00:06:42.091 # threads/core: 1 00:06:42.091 Run time: 1 seconds 00:06:42.091 Verify: Yes 00:06:42.091 00:06:42.091 Running for 1 seconds... 
00:06:42.091 00:06:42.091 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.091 ------------------------------------------------------------------------------------ 00:06:42.091 0,0 4320/s 178 MiB/s 0 0 00:06:42.091 ==================================================================================== 00:06:42.091 Total 4320/s 458 MiB/s 0 0' 00:06:42.091 14:07:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.091 14:07:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.091 14:07:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:42.091 14:07:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:42.091 14:07:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.091 14:07:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.091 14:07:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.091 14:07:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.091 14:07:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.091 14:07:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.091 14:07:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.091 14:07:40 -- accel/accel.sh@42 -- # jq -r . 00:06:42.091 [2024-11-19 14:07:40.459756] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.091 [2024-11-19 14:07:40.459899] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59535 ] 00:06:42.091 [2024-11-19 14:07:40.610098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.352 [2024-11-19 14:07:40.752373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.352 14:07:40 -- accel/accel.sh@21 -- # val= 00:06:42.352 14:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.353 14:07:40 -- accel/accel.sh@21 -- # val= 00:06:42.353 14:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.353 14:07:40 -- accel/accel.sh@21 -- # val= 00:06:42.353 14:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.353 14:07:40 -- accel/accel.sh@21 -- # val=0x1 00:06:42.353 14:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.353 14:07:40 -- accel/accel.sh@21 -- # val= 00:06:42.353 14:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.353 14:07:40 -- accel/accel.sh@21 -- # val= 00:06:42.353 14:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.353 14:07:40 -- accel/accel.sh@21 -- # val=decompress 00:06:42.353 14:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.353 14:07:40 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:42.353 14:07:40 -- accel/accel.sh@20 
-- # IFS=: 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.353 14:07:40 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:42.353 14:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.353 14:07:40 -- accel/accel.sh@21 -- # val= 00:06:42.353 14:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.353 14:07:40 -- accel/accel.sh@21 -- # val=software 00:06:42.353 14:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.353 14:07:40 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.353 14:07:40 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:42.353 14:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.353 14:07:40 -- accel/accel.sh@21 -- # val=32 00:06:42.353 14:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.353 14:07:40 -- accel/accel.sh@21 -- # val=32 00:06:42.353 14:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.353 14:07:40 -- accel/accel.sh@21 -- # val=1 00:06:42.353 14:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.353 14:07:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.353 14:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.353 14:07:40 -- accel/accel.sh@21 -- # val=Yes 00:06:42.353 14:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.353 14:07:40 -- accel/accel.sh@21 -- # val= 00:06:42.353 14:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.353 14:07:40 -- accel/accel.sh@21 -- # val= 00:06:42.353 14:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.353 14:07:40 -- accel/accel.sh@20 -- # read -r var val 00:06:44.262 14:07:42 -- accel/accel.sh@21 -- # val= 00:06:44.262 14:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.262 14:07:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.262 14:07:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.262 14:07:42 -- accel/accel.sh@21 -- # val= 00:06:44.262 14:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.262 14:07:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.262 14:07:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.262 14:07:42 -- accel/accel.sh@21 -- # val= 00:06:44.262 14:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.262 14:07:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.262 14:07:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.262 14:07:42 -- accel/accel.sh@21 -- # 
val= 00:06:44.262 14:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.262 14:07:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.262 14:07:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.262 14:07:42 -- accel/accel.sh@21 -- # val= 00:06:44.262 14:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.262 14:07:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.262 14:07:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.262 14:07:42 -- accel/accel.sh@21 -- # val= 00:06:44.262 14:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.262 14:07:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.262 14:07:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.262 14:07:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.262 14:07:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:44.262 14:07:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.262 00:06:44.262 real 0m4.095s 00:06:44.262 user 0m3.648s 00:06:44.262 sys 0m0.240s 00:06:44.262 14:07:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.262 ************************************ 00:06:44.262 END TEST accel_decmop_full 00:06:44.262 ************************************ 00:06:44.262 14:07:42 -- common/autotest_common.sh@10 -- # set +x 00:06:44.262 14:07:42 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:44.262 14:07:42 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:44.263 14:07:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.263 14:07:42 -- common/autotest_common.sh@10 -- # set +x 00:06:44.263 ************************************ 00:06:44.263 START TEST accel_decomp_mcore 00:06:44.263 ************************************ 00:06:44.263 14:07:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:44.263 14:07:42 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.263 14:07:42 -- accel/accel.sh@17 -- # local accel_module 00:06:44.263 14:07:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:44.263 14:07:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:44.263 14:07:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.263 14:07:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.263 14:07:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.263 14:07:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.263 14:07:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.263 14:07:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.263 14:07:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.263 14:07:42 -- accel/accel.sh@42 -- # jq -r . 00:06:44.263 [2024-11-19 14:07:42.430422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
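Every case in this file follows the wrapper pattern visible in the banners: run_test (from common/autotest_common.sh) prints "START TEST <name>", executes the test body under xtrace and time (hence the real/user/sys block after each case), then prints "END TEST <name>". A minimal sketch of what the output suggests the wrapper does; the real helper also manages xtrace toggling and exit-status bookkeeping not shown here:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"      # e.g. accel_test -t 1 -w decompress ... -m 0xf
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }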
00:06:44.263 [2024-11-19 14:07:42.430526] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59578 ] 00:06:44.263 [2024-11-19 14:07:42.578125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:44.263 [2024-11-19 14:07:42.720921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.263 [2024-11-19 14:07:42.721002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.263 [2024-11-19 14:07:42.720990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.263 [2024-11-19 14:07:42.720966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.162 14:07:44 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:46.162 00:06:46.162 SPDK Configuration: 00:06:46.162 Core mask: 0xf 00:06:46.162 00:06:46.162 Accel Perf Configuration: 00:06:46.162 Workload Type: decompress 00:06:46.162 Transfer size: 4096 bytes 00:06:46.162 Vector count 1 00:06:46.162 Module: software 00:06:46.162 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:46.162 Queue depth: 32 00:06:46.162 Allocate depth: 32 00:06:46.162 # threads/core: 1 00:06:46.162 Run time: 1 seconds 00:06:46.162 Verify: Yes 00:06:46.162 00:06:46.162 Running for 1 seconds... 00:06:46.162 00:06:46.162 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.162 ------------------------------------------------------------------------------------ 00:06:46.162 0,0 76256/s 140 MiB/s 0 0 00:06:46.162 3,0 58688/s 108 MiB/s 0 0 00:06:46.162 2,0 58336/s 107 MiB/s 0 0 00:06:46.162 1,0 58784/s 108 MiB/s 0 0 00:06:46.162 ==================================================================================== 00:06:46.162 Total 252064/s 984 MiB/s 0 0' 00:06:46.162 14:07:44 -- accel/accel.sh@20 -- # IFS=: 00:06:46.162 14:07:44 -- accel/accel.sh@20 -- # read -r var val 00:06:46.162 14:07:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:46.162 14:07:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:46.162 14:07:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.162 14:07:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.162 14:07:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.162 14:07:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.162 14:07:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.162 14:07:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.162 14:07:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.162 14:07:44 -- accel/accel.sh@42 -- # jq -r . 00:06:46.162 [2024-11-19 14:07:44.358628] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
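The -m 0xf mask spreads the workload across four reactors (the "Reactor started on core 0..3" notices above), and the Total row is simply the per-core sum: 76256 + 58688 + 58336 + 58784 = 252064 transfers/s, which at 4096 bytes each is about 984 MiB/s, exactly as reported. A quick shell check:

    echo $(( 252064 * 4096 / 1048576 ))   # 984 (MiB/s)

Core 0 runs noticeably ahead of cores 1-3 here (76256/s vs ~58,700/s); the same skew shows up again in the full-buffer mcore run below.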
00:06:46.162 [2024-11-19 14:07:44.358741] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59607 ] 00:06:46.162 [2024-11-19 14:07:44.508639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.162 [2024-11-19 14:07:44.699847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.162 [2024-11-19 14:07:44.700003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.162 [2024-11-19 14:07:44.700413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.162 [2024-11-19 14:07:44.700621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.420 14:07:44 -- accel/accel.sh@21 -- # val= 00:06:46.420 14:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # IFS=: 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # read -r var val 00:06:46.420 14:07:44 -- accel/accel.sh@21 -- # val= 00:06:46.420 14:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # IFS=: 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # read -r var val 00:06:46.420 14:07:44 -- accel/accel.sh@21 -- # val= 00:06:46.420 14:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # IFS=: 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # read -r var val 00:06:46.420 14:07:44 -- accel/accel.sh@21 -- # val=0xf 00:06:46.420 14:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # IFS=: 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # read -r var val 00:06:46.420 14:07:44 -- accel/accel.sh@21 -- # val= 00:06:46.420 14:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # IFS=: 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # read -r var val 00:06:46.420 14:07:44 -- accel/accel.sh@21 -- # val= 00:06:46.420 14:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # IFS=: 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # read -r var val 00:06:46.420 14:07:44 -- accel/accel.sh@21 -- # val=decompress 00:06:46.420 14:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.420 14:07:44 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # IFS=: 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # read -r var val 00:06:46.420 14:07:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.420 14:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # IFS=: 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # read -r var val 00:06:46.420 14:07:44 -- accel/accel.sh@21 -- # val= 00:06:46.420 14:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # IFS=: 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # read -r var val 00:06:46.420 14:07:44 -- accel/accel.sh@21 -- # val=software 00:06:46.420 14:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.420 14:07:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # IFS=: 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # read -r var val 00:06:46.420 14:07:44 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:46.420 14:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # IFS=: 
00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # read -r var val 00:06:46.420 14:07:44 -- accel/accel.sh@21 -- # val=32 00:06:46.420 14:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # IFS=: 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # read -r var val 00:06:46.420 14:07:44 -- accel/accel.sh@21 -- # val=32 00:06:46.420 14:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # IFS=: 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # read -r var val 00:06:46.420 14:07:44 -- accel/accel.sh@21 -- # val=1 00:06:46.420 14:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # IFS=: 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # read -r var val 00:06:46.420 14:07:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:46.420 14:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # IFS=: 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # read -r var val 00:06:46.420 14:07:44 -- accel/accel.sh@21 -- # val=Yes 00:06:46.420 14:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # IFS=: 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # read -r var val 00:06:46.420 14:07:44 -- accel/accel.sh@21 -- # val= 00:06:46.420 14:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # IFS=: 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # read -r var val 00:06:46.420 14:07:44 -- accel/accel.sh@21 -- # val= 00:06:46.420 14:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # IFS=: 00:06:46.420 14:07:44 -- accel/accel.sh@20 -- # read -r var val 00:06:47.794 14:07:46 -- accel/accel.sh@21 -- # val= 00:06:47.794 14:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.794 14:07:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.794 14:07:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.794 14:07:46 -- accel/accel.sh@21 -- # val= 00:06:47.794 14:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.794 14:07:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.794 14:07:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.794 14:07:46 -- accel/accel.sh@21 -- # val= 00:06:47.794 14:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.794 14:07:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.794 14:07:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.794 14:07:46 -- accel/accel.sh@21 -- # val= 00:06:47.794 14:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.794 14:07:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.794 14:07:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.794 14:07:46 -- accel/accel.sh@21 -- # val= 00:06:47.794 14:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.794 14:07:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.794 14:07:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.794 14:07:46 -- accel/accel.sh@21 -- # val= 00:06:47.794 14:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.794 14:07:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.794 14:07:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.794 14:07:46 -- accel/accel.sh@21 -- # val= 00:06:47.794 14:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.794 14:07:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.794 14:07:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.794 14:07:46 -- accel/accel.sh@21 -- # val= 00:06:47.794 14:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.794 14:07:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.794 14:07:46 -- 
accel/accel.sh@20 -- # read -r var val 00:06:47.794 14:07:46 -- accel/accel.sh@21 -- # val= 00:06:47.794 14:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.794 14:07:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.794 14:07:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.794 14:07:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.794 14:07:46 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:47.794 14:07:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.794 ************************************ 00:06:47.794 END TEST accel_decomp_mcore 00:06:47.794 ************************************ 00:06:47.794 00:06:47.794 real 0m3.939s 00:06:47.794 user 0m11.837s 00:06:47.794 sys 0m0.270s 00:06:47.794 14:07:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.794 14:07:46 -- common/autotest_common.sh@10 -- # set +x 00:06:48.052 14:07:46 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:48.052 14:07:46 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:48.052 14:07:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.052 14:07:46 -- common/autotest_common.sh@10 -- # set +x 00:06:48.052 ************************************ 00:06:48.052 START TEST accel_decomp_full_mcore 00:06:48.052 ************************************ 00:06:48.052 14:07:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:48.052 14:07:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.052 14:07:46 -- accel/accel.sh@17 -- # local accel_module 00:06:48.052 14:07:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:48.052 14:07:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:48.052 14:07:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.052 14:07:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.052 14:07:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.052 14:07:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.052 14:07:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.052 14:07:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.052 14:07:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.052 14:07:46 -- accel/accel.sh@42 -- # jq -r . 00:06:48.052 [2024-11-19 14:07:46.430486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:48.052 [2024-11-19 14:07:46.430591] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59651 ] 00:06:48.052 [2024-11-19 14:07:46.573070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.310 [2024-11-19 14:07:46.714317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.310 [2024-11-19 14:07:46.714610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.310 [2024-11-19 14:07:46.714831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.310 [2024-11-19 14:07:46.714845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.210 14:07:48 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:50.210 00:06:50.210 SPDK Configuration: 00:06:50.210 Core mask: 0xf 00:06:50.210 00:06:50.210 Accel Perf Configuration: 00:06:50.210 Workload Type: decompress 00:06:50.210 Transfer size: 111250 bytes 00:06:50.210 Vector count 1 00:06:50.210 Module: software 00:06:50.210 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.210 Queue depth: 32 00:06:50.210 Allocate depth: 32 00:06:50.210 # threads/core: 1 00:06:50.210 Run time: 1 seconds 00:06:50.210 Verify: Yes 00:06:50.210 00:06:50.210 Running for 1 seconds... 00:06:50.210 00:06:50.210 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.210 ------------------------------------------------------------------------------------ 00:06:50.210 0,0 5600/s 231 MiB/s 0 0 00:06:50.210 3,0 4320/s 178 MiB/s 0 0 00:06:50.210 2,0 4320/s 178 MiB/s 0 0 00:06:50.210 1,0 4352/s 179 MiB/s 0 0 00:06:50.210 ==================================================================================== 00:06:50.210 Total 18592/s 1972 MiB/s 0 0' 00:06:50.210 14:07:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:50.210 14:07:48 -- accel/accel.sh@20 -- # IFS=: 00:06:50.210 14:07:48 -- accel/accel.sh@20 -- # read -r var val 00:06:50.210 14:07:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:50.210 14:07:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.210 14:07:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.210 14:07:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.210 14:07:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.210 14:07:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.210 14:07:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.210 14:07:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.210 14:07:48 -- accel/accel.sh@42 -- # jq -r . 00:06:50.210 [2024-11-19 14:07:48.375057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
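The same arithmetic holds for the full-buffer mcore summary above: 18592 transfers/s at 111250 bytes each comes to about 1972 MiB/s, matching the Total row, and is roughly 4.3x the 4320/s that accel_decmop_full managed on a single core earlier. The mild superlinearity traces to core 0 again outpacing the others (5600/s vs ~4320/s on cores 1-3), consistent with the 4 KiB mcore run:

    echo $(( 18592 * 111250 / 1048576 ))   # 1972 (MiB/s)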
00:06:50.210 [2024-11-19 14:07:48.375172] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59680 ] 00:06:50.210 [2024-11-19 14:07:48.521988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.210 [2024-11-19 14:07:48.667555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.211 [2024-11-19 14:07:48.667693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.211 [2024-11-19 14:07:48.668359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.211 [2024-11-19 14:07:48.668378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.468 14:07:48 -- accel/accel.sh@21 -- # val= 00:06:50.468 14:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # IFS=: 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # read -r var val 00:06:50.468 14:07:48 -- accel/accel.sh@21 -- # val= 00:06:50.468 14:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # IFS=: 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # read -r var val 00:06:50.468 14:07:48 -- accel/accel.sh@21 -- # val= 00:06:50.468 14:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # IFS=: 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # read -r var val 00:06:50.468 14:07:48 -- accel/accel.sh@21 -- # val=0xf 00:06:50.468 14:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # IFS=: 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # read -r var val 00:06:50.468 14:07:48 -- accel/accel.sh@21 -- # val= 00:06:50.468 14:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # IFS=: 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # read -r var val 00:06:50.468 14:07:48 -- accel/accel.sh@21 -- # val= 00:06:50.468 14:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # IFS=: 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # read -r var val 00:06:50.468 14:07:48 -- accel/accel.sh@21 -- # val=decompress 00:06:50.468 14:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.468 14:07:48 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # IFS=: 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # read -r var val 00:06:50.468 14:07:48 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:50.468 14:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # IFS=: 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # read -r var val 00:06:50.468 14:07:48 -- accel/accel.sh@21 -- # val= 00:06:50.468 14:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # IFS=: 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # read -r var val 00:06:50.468 14:07:48 -- accel/accel.sh@21 -- # val=software 00:06:50.468 14:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.468 14:07:48 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # IFS=: 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # read -r var val 00:06:50.468 14:07:48 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.468 14:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # IFS=: 
00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # read -r var val 00:06:50.468 14:07:48 -- accel/accel.sh@21 -- # val=32 00:06:50.468 14:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # IFS=: 00:06:50.468 14:07:48 -- accel/accel.sh@20 -- # read -r var val 00:06:50.468 14:07:48 -- accel/accel.sh@21 -- # val=32 00:06:50.469 14:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.469 14:07:48 -- accel/accel.sh@20 -- # IFS=: 00:06:50.469 14:07:48 -- accel/accel.sh@20 -- # read -r var val 00:06:50.469 14:07:48 -- accel/accel.sh@21 -- # val=1 00:06:50.469 14:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.469 14:07:48 -- accel/accel.sh@20 -- # IFS=: 00:06:50.469 14:07:48 -- accel/accel.sh@20 -- # read -r var val 00:06:50.469 14:07:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.469 14:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.469 14:07:48 -- accel/accel.sh@20 -- # IFS=: 00:06:50.469 14:07:48 -- accel/accel.sh@20 -- # read -r var val 00:06:50.469 14:07:48 -- accel/accel.sh@21 -- # val=Yes 00:06:50.469 14:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.469 14:07:48 -- accel/accel.sh@20 -- # IFS=: 00:06:50.469 14:07:48 -- accel/accel.sh@20 -- # read -r var val 00:06:50.469 14:07:48 -- accel/accel.sh@21 -- # val= 00:06:50.469 14:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.469 14:07:48 -- accel/accel.sh@20 -- # IFS=: 00:06:50.469 14:07:48 -- accel/accel.sh@20 -- # read -r var val 00:06:50.469 14:07:48 -- accel/accel.sh@21 -- # val= 00:06:50.469 14:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.469 14:07:48 -- accel/accel.sh@20 -- # IFS=: 00:06:50.469 14:07:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.841 14:07:50 -- accel/accel.sh@21 -- # val= 00:06:51.841 14:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.841 14:07:50 -- accel/accel.sh@20 -- # IFS=: 00:06:51.841 14:07:50 -- accel/accel.sh@20 -- # read -r var val 00:06:51.841 14:07:50 -- accel/accel.sh@21 -- # val= 00:06:51.841 14:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.841 14:07:50 -- accel/accel.sh@20 -- # IFS=: 00:06:51.841 14:07:50 -- accel/accel.sh@20 -- # read -r var val 00:06:51.841 14:07:50 -- accel/accel.sh@21 -- # val= 00:06:51.841 14:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.841 14:07:50 -- accel/accel.sh@20 -- # IFS=: 00:06:51.841 14:07:50 -- accel/accel.sh@20 -- # read -r var val 00:06:51.841 14:07:50 -- accel/accel.sh@21 -- # val= 00:06:51.841 14:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.841 14:07:50 -- accel/accel.sh@20 -- # IFS=: 00:06:51.841 14:07:50 -- accel/accel.sh@20 -- # read -r var val 00:06:51.841 14:07:50 -- accel/accel.sh@21 -- # val= 00:06:51.841 14:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.841 14:07:50 -- accel/accel.sh@20 -- # IFS=: 00:06:51.841 14:07:50 -- accel/accel.sh@20 -- # read -r var val 00:06:51.841 14:07:50 -- accel/accel.sh@21 -- # val= 00:06:51.841 14:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.841 14:07:50 -- accel/accel.sh@20 -- # IFS=: 00:06:51.841 14:07:50 -- accel/accel.sh@20 -- # read -r var val 00:06:51.841 14:07:50 -- accel/accel.sh@21 -- # val= 00:06:51.841 14:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.841 14:07:50 -- accel/accel.sh@20 -- # IFS=: 00:06:51.841 14:07:50 -- accel/accel.sh@20 -- # read -r var val 00:06:51.841 14:07:50 -- accel/accel.sh@21 -- # val= 00:06:51.841 14:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.841 14:07:50 -- accel/accel.sh@20 -- # IFS=: 00:06:51.841 14:07:50 -- 
accel/accel.sh@20 -- # read -r var val 00:06:51.841 14:07:50 -- accel/accel.sh@21 -- # val= 00:06:51.841 14:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.841 14:07:50 -- accel/accel.sh@20 -- # IFS=: 00:06:51.841 14:07:50 -- accel/accel.sh@20 -- # read -r var val 00:06:51.841 14:07:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.841 14:07:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:51.841 14:07:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.841 00:06:51.841 real 0m3.895s 00:06:51.841 user 0m11.836s 00:06:51.841 sys 0m0.258s 00:06:51.841 14:07:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.841 14:07:50 -- common/autotest_common.sh@10 -- # set +x 00:06:51.841 ************************************ 00:06:51.841 END TEST accel_decomp_full_mcore 00:06:51.841 ************************************ 00:06:51.841 14:07:50 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:51.841 14:07:50 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:51.841 14:07:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.841 14:07:50 -- common/autotest_common.sh@10 -- # set +x 00:06:51.841 ************************************ 00:06:51.841 START TEST accel_decomp_mthread 00:06:51.841 ************************************ 00:06:51.841 14:07:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:51.841 14:07:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.841 14:07:50 -- accel/accel.sh@17 -- # local accel_module 00:06:51.841 14:07:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:51.841 14:07:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:51.842 14:07:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.842 14:07:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.842 14:07:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.842 14:07:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.842 14:07:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.842 14:07:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.842 14:07:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.842 14:07:50 -- accel/accel.sh@42 -- # jq -r . 00:06:51.842 [2024-11-19 14:07:50.353726] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:51.842 [2024-11-19 14:07:50.353943] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59723 ] 00:06:52.099 [2024-11-19 14:07:50.501252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.099 [2024-11-19 14:07:50.648341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.030 14:07:52 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:54.030 00:06:54.030 SPDK Configuration: 00:06:54.030 Core mask: 0x1 00:06:54.030 00:06:54.030 Accel Perf Configuration: 00:06:54.030 Workload Type: decompress 00:06:54.030 Transfer size: 4096 bytes 00:06:54.030 Vector count 1 00:06:54.030 Module: software 00:06:54.030 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:54.030 Queue depth: 32 00:06:54.030 Allocate depth: 32 00:06:54.030 # threads/core: 2 00:06:54.030 Run time: 1 seconds 00:06:54.030 Verify: Yes 00:06:54.030 00:06:54.030 Running for 1 seconds... 00:06:54.030 00:06:54.030 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.030 ------------------------------------------------------------------------------------ 00:06:54.030 0,1 39264/s 72 MiB/s 0 0 00:06:54.030 0,0 39168/s 72 MiB/s 0 0 00:06:54.030 ==================================================================================== 00:06:54.030 Total 78432/s 306 MiB/s 0 0' 00:06:54.030 14:07:52 -- accel/accel.sh@20 -- # IFS=: 00:06:54.030 14:07:52 -- accel/accel.sh@20 -- # read -r var val 00:06:54.030 14:07:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:54.030 14:07:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:54.030 14:07:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.030 14:07:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.030 14:07:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.030 14:07:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.030 14:07:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.030 14:07:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.030 14:07:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.030 14:07:52 -- accel/accel.sh@42 -- # jq -r . 00:06:54.030 [2024-11-19 14:07:52.281140] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
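With -T 2, accel_perf keeps the 0x1 core mask but runs two worker threads on that core, hence "# threads/core: 2" and the two (core,thread) rows, 0,0 and 0,1, in the table above. Their combined 78432 transfers/s (about 306 MiB/s) is within a few percent of the ~76768/s a single thread achieved in the accel_decomp case, as one would expect for a CPU-bound software decompress path: two threads on one core share cycles rather than adding throughput.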
00:06:54.030 [2024-11-19 14:07:52.281245] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59745 ] 00:06:54.030 [2024-11-19 14:07:52.427527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.287 [2024-11-19 14:07:52.612051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.287 14:07:52 -- accel/accel.sh@21 -- # val= 00:06:54.287 14:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.287 14:07:52 -- accel/accel.sh@20 -- # IFS=: 00:06:54.287 14:07:52 -- accel/accel.sh@20 -- # read -r var val 00:06:54.287 14:07:52 -- accel/accel.sh@21 -- # val= 00:06:54.287 14:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.287 14:07:52 -- accel/accel.sh@20 -- # IFS=: 00:06:54.287 14:07:52 -- accel/accel.sh@20 -- # read -r var val 00:06:54.287 14:07:52 -- accel/accel.sh@21 -- # val= 00:06:54.287 14:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.287 14:07:52 -- accel/accel.sh@20 -- # IFS=: 00:06:54.287 14:07:52 -- accel/accel.sh@20 -- # read -r var val 00:06:54.287 14:07:52 -- accel/accel.sh@21 -- # val=0x1 00:06:54.287 14:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.287 14:07:52 -- accel/accel.sh@20 -- # IFS=: 00:06:54.287 14:07:52 -- accel/accel.sh@20 -- # read -r var val 00:06:54.287 14:07:52 -- accel/accel.sh@21 -- # val= 00:06:54.287 14:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.287 14:07:52 -- accel/accel.sh@20 -- # IFS=: 00:06:54.287 14:07:52 -- accel/accel.sh@20 -- # read -r var val 00:06:54.287 14:07:52 -- accel/accel.sh@21 -- # val= 00:06:54.287 14:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.287 14:07:52 -- accel/accel.sh@20 -- # IFS=: 00:06:54.287 14:07:52 -- accel/accel.sh@20 -- # read -r var val 00:06:54.287 14:07:52 -- accel/accel.sh@21 -- # val=decompress 00:06:54.287 14:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.287 14:07:52 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:54.287 14:07:52 -- accel/accel.sh@20 -- # IFS=: 00:06:54.287 14:07:52 -- accel/accel.sh@20 -- # read -r var val 00:06:54.287 14:07:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.288 14:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # IFS=: 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # read -r var val 00:06:54.288 14:07:52 -- accel/accel.sh@21 -- # val= 00:06:54.288 14:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # IFS=: 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # read -r var val 00:06:54.288 14:07:52 -- accel/accel.sh@21 -- # val=software 00:06:54.288 14:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.288 14:07:52 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # IFS=: 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # read -r var val 00:06:54.288 14:07:52 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:54.288 14:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # IFS=: 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # read -r var val 00:06:54.288 14:07:52 -- accel/accel.sh@21 -- # val=32 00:06:54.288 14:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # IFS=: 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # read -r var val 00:06:54.288 14:07:52 -- 
accel/accel.sh@21 -- # val=32 00:06:54.288 14:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # IFS=: 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # read -r var val 00:06:54.288 14:07:52 -- accel/accel.sh@21 -- # val=2 00:06:54.288 14:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # IFS=: 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # read -r var val 00:06:54.288 14:07:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.288 14:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # IFS=: 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # read -r var val 00:06:54.288 14:07:52 -- accel/accel.sh@21 -- # val=Yes 00:06:54.288 14:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # IFS=: 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # read -r var val 00:06:54.288 14:07:52 -- accel/accel.sh@21 -- # val= 00:06:54.288 14:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # IFS=: 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # read -r var val 00:06:54.288 14:07:52 -- accel/accel.sh@21 -- # val= 00:06:54.288 14:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # IFS=: 00:06:54.288 14:07:52 -- accel/accel.sh@20 -- # read -r var val 00:06:56.189 14:07:54 -- accel/accel.sh@21 -- # val= 00:06:56.189 14:07:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.189 14:07:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.189 14:07:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.189 14:07:54 -- accel/accel.sh@21 -- # val= 00:06:56.189 14:07:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.189 14:07:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.189 14:07:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.189 14:07:54 -- accel/accel.sh@21 -- # val= 00:06:56.189 14:07:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.189 14:07:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.189 14:07:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.189 14:07:54 -- accel/accel.sh@21 -- # val= 00:06:56.189 14:07:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.189 14:07:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.189 14:07:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.189 14:07:54 -- accel/accel.sh@21 -- # val= 00:06:56.189 14:07:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.189 14:07:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.189 14:07:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.189 14:07:54 -- accel/accel.sh@21 -- # val= 00:06:56.189 14:07:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.189 14:07:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.189 14:07:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.189 14:07:54 -- accel/accel.sh@21 -- # val= 00:06:56.189 14:07:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.189 14:07:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.189 14:07:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.189 14:07:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.189 14:07:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:56.189 14:07:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.189 00:06:56.189 real 0m4.053s 00:06:56.189 user 0m3.606s 00:06:56.189 sys 0m0.241s 00:06:56.189 14:07:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.189 14:07:54 -- common/autotest_common.sh@10 -- # set +x 00:06:56.189 ************************************ 00:06:56.189 END 
TEST accel_decomp_mthread 00:06:56.189 ************************************ 00:06:56.189 14:07:54 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.189 14:07:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:56.189 14:07:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.189 14:07:54 -- common/autotest_common.sh@10 -- # set +x 00:06:56.189 ************************************ 00:06:56.189 START TEST accel_deomp_full_mthread 00:06:56.189 ************************************ 00:06:56.189 14:07:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.189 14:07:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.189 14:07:54 -- accel/accel.sh@17 -- # local accel_module 00:06:56.189 14:07:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.189 14:07:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.189 14:07:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.189 14:07:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.189 14:07:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.189 14:07:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.189 14:07:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.189 14:07:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.189 14:07:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.189 14:07:54 -- accel/accel.sh@42 -- # jq -r . 00:06:56.189 [2024-11-19 14:07:54.460393] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.189 [2024-11-19 14:07:54.460496] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59786 ] 00:06:56.189 [2024-11-19 14:07:54.608162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.447 [2024-11-19 14:07:54.820137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.348 14:07:56 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:58.348 00:06:58.348 SPDK Configuration: 00:06:58.348 Core mask: 0x1 00:06:58.348 00:06:58.348 Accel Perf Configuration: 00:06:58.348 Workload Type: decompress 00:06:58.348 Transfer size: 111250 bytes 00:06:58.348 Vector count 1 00:06:58.348 Module: software 00:06:58.348 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:58.348 Queue depth: 32 00:06:58.348 Allocate depth: 32 00:06:58.348 # threads/core: 2 00:06:58.348 Run time: 1 seconds 00:06:58.348 Verify: Yes 00:06:58.348 00:06:58.348 Running for 1 seconds... 
00:06:58.348 00:06:58.348 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.348 ------------------------------------------------------------------------------------ 00:06:58.348 0,1 2208/s 91 MiB/s 0 0 00:06:58.348 0,0 2176/s 89 MiB/s 0 0 00:06:58.348 ==================================================================================== 00:06:58.348 Total 4384/s 465 MiB/s 0 0' 00:06:58.348 14:07:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.348 14:07:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.348 14:07:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:58.348 14:07:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:58.348 14:07:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.348 14:07:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.348 14:07:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.348 14:07:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.348 14:07:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.348 14:07:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.348 14:07:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.348 14:07:56 -- accel/accel.sh@42 -- # jq -r . 00:06:58.348 [2024-11-19 14:07:56.648982] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:58.348 [2024-11-19 14:07:56.649119] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59817 ] 00:06:58.348 [2024-11-19 14:07:56.799352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.607 [2024-11-19 14:07:56.972294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.607 14:07:57 -- accel/accel.sh@21 -- # val= 00:06:58.607 14:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # IFS=: 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # read -r var val 00:06:58.607 14:07:57 -- accel/accel.sh@21 -- # val= 00:06:58.607 14:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # IFS=: 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # read -r var val 00:06:58.607 14:07:57 -- accel/accel.sh@21 -- # val= 00:06:58.607 14:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # IFS=: 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # read -r var val 00:06:58.607 14:07:57 -- accel/accel.sh@21 -- # val=0x1 00:06:58.607 14:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # IFS=: 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # read -r var val 00:06:58.607 14:07:57 -- accel/accel.sh@21 -- # val= 00:06:58.607 14:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # IFS=: 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # read -r var val 00:06:58.607 14:07:57 -- accel/accel.sh@21 -- # val= 00:06:58.607 14:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # IFS=: 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # read -r var val 00:06:58.607 14:07:57 -- accel/accel.sh@21 -- # val=decompress 00:06:58.607 14:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.607 14:07:57 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # IFS=: 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # read -r var val 00:06:58.607 14:07:57 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:58.607 14:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # IFS=: 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # read -r var val 00:06:58.607 14:07:57 -- accel/accel.sh@21 -- # val= 00:06:58.607 14:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # IFS=: 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # read -r var val 00:06:58.607 14:07:57 -- accel/accel.sh@21 -- # val=software 00:06:58.607 14:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.607 14:07:57 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # IFS=: 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # read -r var val 00:06:58.607 14:07:57 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:58.607 14:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # IFS=: 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # read -r var val 00:06:58.607 14:07:57 -- accel/accel.sh@21 -- # val=32 00:06:58.607 14:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # IFS=: 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # read -r var val 00:06:58.607 14:07:57 -- accel/accel.sh@21 -- # val=32 00:06:58.607 14:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # IFS=: 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # read -r var val 00:06:58.607 14:07:57 -- accel/accel.sh@21 -- # val=2 00:06:58.607 14:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # IFS=: 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # read -r var val 00:06:58.607 14:07:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.607 14:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # IFS=: 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # read -r var val 00:06:58.607 14:07:57 -- accel/accel.sh@21 -- # val=Yes 00:06:58.607 14:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # IFS=: 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # read -r var val 00:06:58.607 14:07:57 -- accel/accel.sh@21 -- # val= 00:06:58.607 14:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # IFS=: 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # read -r var val 00:06:58.607 14:07:57 -- accel/accel.sh@21 -- # val= 00:06:58.607 14:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # IFS=: 00:06:58.607 14:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:00.520 14:07:58 -- accel/accel.sh@21 -- # val= 00:07:00.520 14:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.520 14:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.520 14:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.520 14:07:58 -- accel/accel.sh@21 -- # val= 00:07:00.520 14:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.520 14:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.520 14:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.520 14:07:58 -- accel/accel.sh@21 -- # val= 00:07:00.520 14:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.520 14:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.520 14:07:58 -- accel/accel.sh@20 -- # 
read -r var val 00:07:00.520 14:07:58 -- accel/accel.sh@21 -- # val= 00:07:00.520 14:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.520 14:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.520 14:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.520 14:07:58 -- accel/accel.sh@21 -- # val= 00:07:00.520 14:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.520 14:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.520 14:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.520 14:07:58 -- accel/accel.sh@21 -- # val= 00:07:00.520 14:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.520 14:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.520 14:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.520 14:07:58 -- accel/accel.sh@21 -- # val= 00:07:00.520 14:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.520 14:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.520 14:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.520 ************************************ 00:07:00.520 END TEST accel_deomp_full_mthread 00:07:00.520 ************************************ 00:07:00.520 14:07:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.520 14:07:58 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:00.520 14:07:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.520 00:07:00.520 real 0m4.404s 00:07:00.520 user 0m3.928s 00:07:00.520 sys 0m0.257s 00:07:00.520 14:07:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.520 14:07:58 -- common/autotest_common.sh@10 -- # set +x 00:07:00.520 14:07:58 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:00.520 14:07:58 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:00.520 14:07:58 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:00.520 14:07:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.520 14:07:58 -- common/autotest_common.sh@10 -- # set +x 00:07:00.520 14:07:58 -- accel/accel.sh@129 -- # build_accel_config 00:07:00.520 14:07:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.520 14:07:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.520 14:07:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.520 14:07:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.520 14:07:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.520 14:07:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.520 14:07:58 -- accel/accel.sh@42 -- # jq -r . 00:07:00.520 ************************************ 00:07:00.520 START TEST accel_dif_functional_tests 00:07:00.520 ************************************ 00:07:00.520 14:07:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:00.520 [2024-11-19 14:07:58.964362] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
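The two decompress runs above drive the same accel_perf example binary; the second differs only in passing -o 0 (full 111250-byte transfers) instead of the default 4096-byte blocks, and both request two software worker threads per core (-T 2), which is what produces the separate 0,0 and 0,1 rows in the results table. A minimal standalone invocation, reconstructed from the command line recorded above (paths are this job's /home/vagrant/spdk_repo checkout; adjust for a local build):

    # Flags as recorded in the log: 1-second run (-t), decompress workload (-w),
    # pre-compressed input file (-l), verify output (-y), transfer size (-o;
    # 0 appears to mean "use the whole input"), two worker threads per core (-T).
    SPDK_ROOT=/home/vagrant/spdk_repo/spdk
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_ROOT/test/accel/bib" -y -o 0 -T 2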
00:07:00.520 [2024-11-19 14:07:58.964500] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59859 ] 00:07:00.782 [2024-11-19 14:07:59.120408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.043 [2024-11-19 14:07:59.354202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.043 [2024-11-19 14:07:59.354586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.043 [2024-11-19 14:07:59.354634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.304 00:07:01.304 00:07:01.304 CUnit - A unit testing framework for C - Version 2.1-3 00:07:01.304 http://cunit.sourceforge.net/ 00:07:01.304 00:07:01.304 00:07:01.304 Suite: accel_dif 00:07:01.304 Test: verify: DIF generated, GUARD check ...passed 00:07:01.304 Test: verify: DIF generated, APPTAG check ...passed 00:07:01.304 Test: verify: DIF generated, REFTAG check ...passed 00:07:01.304 Test: verify: DIF not generated, GUARD check ...passed 00:07:01.304 Test: verify: DIF not generated, APPTAG check ...[2024-11-19 14:07:59.616737] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:01.304 [2024-11-19 14:07:59.616821] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:01.304 [2024-11-19 14:07:59.616932] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:01.304 passed 00:07:01.304 Test: verify: DIF not generated, REFTAG check ...passed 00:07:01.304 Test: verify: APPTAG correct, APPTAG check ...[2024-11-19 14:07:59.617031] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:01.304 [2024-11-19 14:07:59.617075] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:01.304 [2024-11-19 14:07:59.617102] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:01.304 passed 00:07:01.304 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:01.304 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:01.305 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:01.305 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:01.305 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:01.305 Test: generate copy: DIF generated, GUARD check ...passed 00:07:01.305 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:01.305 Test: generate copy: DIF generated, REFTAG check ...[2024-11-19 14:07:59.617241] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:01.305 [2024-11-19 14:07:59.617647] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:01.305 passed 00:07:01.305 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:01.305 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:01.305 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:01.305 Test: generate copy: iovecs-len validate ...passed 00:07:01.305 Test: generate copy: buffer alignment validate ...passed 00:07:01.305 00:07:01.305 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.305 suites 1 1 n/a 0 0 00:07:01.305 tests 20 20 20 0 0 00:07:01.305 
asserts 204 204 204 0 n/a 00:07:01.305 00:07:01.305 Elapsed time = 0.005 seconds 00:07:01.305 [2024-11-19 14:07:59.618228] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:02.251 00:07:02.251 real 0m1.597s 00:07:02.251 user 0m2.891s 00:07:02.251 sys 0m0.239s 00:07:02.251 14:08:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.251 ************************************ 00:07:02.251 END TEST accel_dif_functional_tests 00:07:02.251 ************************************ 00:07:02.251 14:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:02.251 00:07:02.251 real 1m27.184s 00:07:02.251 user 1m35.000s 00:07:02.251 sys 0m6.561s 00:07:02.251 14:08:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.251 ************************************ 00:07:02.251 END TEST accel 00:07:02.251 ************************************ 00:07:02.251 14:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:02.251 14:08:00 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:02.251 14:08:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:02.251 14:08:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.251 14:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:02.251 ************************************ 00:07:02.252 START TEST accel_rpc 00:07:02.252 ************************************ 00:07:02.252 14:08:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:02.252 * Looking for test storage... 00:07:02.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:02.252 14:08:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:02.252 14:08:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:02.252 14:08:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:02.252 14:08:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:02.252 14:08:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:02.252 14:08:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:02.252 14:08:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:02.252 14:08:00 -- scripts/common.sh@335 -- # IFS=.-: 00:07:02.252 14:08:00 -- scripts/common.sh@335 -- # read -ra ver1 00:07:02.252 14:08:00 -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.252 14:08:00 -- scripts/common.sh@336 -- # read -ra ver2 00:07:02.252 14:08:00 -- scripts/common.sh@337 -- # local 'op=<' 00:07:02.252 14:08:00 -- scripts/common.sh@339 -- # ver1_l=2 00:07:02.252 14:08:00 -- scripts/common.sh@340 -- # ver2_l=1 00:07:02.252 14:08:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:02.252 14:08:00 -- scripts/common.sh@343 -- # case "$op" in 00:07:02.252 14:08:00 -- scripts/common.sh@344 -- # : 1 00:07:02.252 14:08:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:02.252 14:08:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.252 14:08:00 -- scripts/common.sh@364 -- # decimal 1 00:07:02.252 14:08:00 -- scripts/common.sh@352 -- # local d=1 00:07:02.252 14:08:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.252 14:08:00 -- scripts/common.sh@354 -- # echo 1 00:07:02.252 14:08:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:02.252 14:08:00 -- scripts/common.sh@365 -- # decimal 2 00:07:02.252 14:08:00 -- scripts/common.sh@352 -- # local d=2 00:07:02.252 14:08:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.252 14:08:00 -- scripts/common.sh@354 -- # echo 2 00:07:02.252 14:08:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:02.252 14:08:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:02.252 14:08:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:02.252 14:08:00 -- scripts/common.sh@367 -- # return 0 00:07:02.252 14:08:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.252 14:08:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:02.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.252 --rc genhtml_branch_coverage=1 00:07:02.252 --rc genhtml_function_coverage=1 00:07:02.252 --rc genhtml_legend=1 00:07:02.252 --rc geninfo_all_blocks=1 00:07:02.252 --rc geninfo_unexecuted_blocks=1 00:07:02.252 00:07:02.252 ' 00:07:02.252 14:08:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:02.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.252 --rc genhtml_branch_coverage=1 00:07:02.252 --rc genhtml_function_coverage=1 00:07:02.252 --rc genhtml_legend=1 00:07:02.252 --rc geninfo_all_blocks=1 00:07:02.252 --rc geninfo_unexecuted_blocks=1 00:07:02.252 00:07:02.252 ' 00:07:02.252 14:08:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:02.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.252 --rc genhtml_branch_coverage=1 00:07:02.252 --rc genhtml_function_coverage=1 00:07:02.252 --rc genhtml_legend=1 00:07:02.252 --rc geninfo_all_blocks=1 00:07:02.252 --rc geninfo_unexecuted_blocks=1 00:07:02.252 00:07:02.252 ' 00:07:02.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.252 14:08:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:02.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.252 --rc genhtml_branch_coverage=1 00:07:02.252 --rc genhtml_function_coverage=1 00:07:02.252 --rc genhtml_legend=1 00:07:02.252 --rc geninfo_all_blocks=1 00:07:02.252 --rc geninfo_unexecuted_blocks=1 00:07:02.252 00:07:02.252 ' 00:07:02.252 14:08:00 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:02.252 14:08:00 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59947 00:07:02.252 14:08:00 -- accel/accel_rpc.sh@15 -- # waitforlisten 59947 00:07:02.252 14:08:00 -- common/autotest_common.sh@829 -- # '[' -z 59947 ']' 00:07:02.252 14:08:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.252 14:08:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.252 14:08:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
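accel_rpc.sh has to assign opcodes before the accel framework initializes, so the target above is launched with --wait-for-rpc and the script parks in waitforlisten until the RPC socket answers. A minimal sketch of that handshake, assuming the default /var/tmp/spdk.sock socket (the real waitforlisten in autotest_common.sh also handles timeouts and custom socket paths):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    spdk_tgt_pid=$!
    # Poll until the target accepts RPCs; spdk_get_version is one of the few
    # methods that works before framework_start_init.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done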
00:07:02.252 14:08:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.252 14:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:02.252 14:08:00 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:02.514 [2024-11-19 14:08:00.847682] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.514 [2024-11-19 14:08:00.848026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59947 ] 00:07:02.514 [2024-11-19 14:08:01.003397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.775 [2024-11-19 14:08:01.231973] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:02.775 [2024-11-19 14:08:01.232486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.348 14:08:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.348 14:08:01 -- common/autotest_common.sh@862 -- # return 0 00:07:03.348 14:08:01 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:03.348 14:08:01 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:03.348 14:08:01 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:03.348 14:08:01 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:03.348 14:08:01 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:03.348 14:08:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:03.348 14:08:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.348 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:07:03.348 ************************************ 00:07:03.348 START TEST accel_assign_opcode 00:07:03.348 ************************************ 00:07:03.348 14:08:01 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:03.348 14:08:01 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:03.348 14:08:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.348 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:07:03.348 [2024-11-19 14:08:01.669454] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:03.348 14:08:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.348 14:08:01 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:03.348 14:08:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.348 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:07:03.348 [2024-11-19 14:08:01.677378] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:03.348 14:08:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.348 14:08:01 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:03.348 14:08:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.348 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:07:03.920 14:08:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.920 14:08:02 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:03.920 14:08:02 -- accel/accel_rpc.sh@42 -- # grep software 00:07:03.920 14:08:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.920 14:08:02 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:03.920 14:08:02 -- common/autotest_common.sh@10 -- # set +x 
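The xtrace above is the heart of the accel_assign_opcode test: assignments made while the target is still in --wait-for-rpc mode are recorded even for a bogus module name, the last assignment wins, and framework_start_init applies it. The same exchange issued directly with rpc.py, matching the calls recorded in the log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC accel_assign_opc -o copy -m incorrect    # accepted: module names are not validated yet
    $RPC accel_assign_opc -o copy -m software     # re-assignment overrides the first one
    $RPC framework_start_init                     # subsystem init applies the assignment
    $RPC accel_get_opc_assignments | jq -r .copy  # prints: software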
00:07:03.920 14:08:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.920 software 00:07:03.920 00:07:03.920 real 0m0.579s 00:07:03.920 user 0m0.028s 00:07:03.920 sys 0m0.014s 00:07:03.920 ************************************ 00:07:03.920 END TEST accel_assign_opcode 00:07:03.920 ************************************ 00:07:03.920 14:08:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.920 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:07:03.920 14:08:02 -- accel/accel_rpc.sh@55 -- # killprocess 59947 00:07:03.920 14:08:02 -- common/autotest_common.sh@936 -- # '[' -z 59947 ']' 00:07:03.920 14:08:02 -- common/autotest_common.sh@940 -- # kill -0 59947 00:07:03.920 14:08:02 -- common/autotest_common.sh@941 -- # uname 00:07:03.920 14:08:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:03.920 14:08:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59947 00:07:03.920 killing process with pid 59947 00:07:03.920 14:08:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:03.920 14:08:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:03.920 14:08:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59947' 00:07:03.920 14:08:02 -- common/autotest_common.sh@955 -- # kill 59947 00:07:03.920 14:08:02 -- common/autotest_common.sh@960 -- # wait 59947 00:07:05.307 00:07:05.307 real 0m3.206s 00:07:05.307 user 0m3.144s 00:07:05.307 sys 0m0.420s 00:07:05.307 14:08:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.307 14:08:03 -- common/autotest_common.sh@10 -- # set +x 00:07:05.307 ************************************ 00:07:05.307 END TEST accel_rpc 00:07:05.307 ************************************ 00:07:05.307 14:08:03 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:05.307 14:08:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:05.307 14:08:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.307 14:08:03 -- common/autotest_common.sh@10 -- # set +x 00:07:05.307 ************************************ 00:07:05.307 START TEST app_cmdline 00:07:05.307 ************************************ 00:07:05.307 14:08:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:05.569 * Looking for test storage... 
00:07:05.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:05.569 14:08:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:05.569 14:08:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:05.569 14:08:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:05.569 14:08:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:05.569 14:08:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:05.569 14:08:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:05.569 14:08:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:05.569 14:08:03 -- scripts/common.sh@335 -- # IFS=.-: 00:07:05.569 14:08:03 -- scripts/common.sh@335 -- # read -ra ver1 00:07:05.569 14:08:03 -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.569 14:08:03 -- scripts/common.sh@336 -- # read -ra ver2 00:07:05.569 14:08:03 -- scripts/common.sh@337 -- # local 'op=<' 00:07:05.569 14:08:03 -- scripts/common.sh@339 -- # ver1_l=2 00:07:05.569 14:08:03 -- scripts/common.sh@340 -- # ver2_l=1 00:07:05.569 14:08:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:05.569 14:08:03 -- scripts/common.sh@343 -- # case "$op" in 00:07:05.569 14:08:03 -- scripts/common.sh@344 -- # : 1 00:07:05.569 14:08:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:05.570 14:08:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.570 14:08:03 -- scripts/common.sh@364 -- # decimal 1 00:07:05.570 14:08:03 -- scripts/common.sh@352 -- # local d=1 00:07:05.570 14:08:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.570 14:08:03 -- scripts/common.sh@354 -- # echo 1 00:07:05.570 14:08:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:05.570 14:08:03 -- scripts/common.sh@365 -- # decimal 2 00:07:05.570 14:08:03 -- scripts/common.sh@352 -- # local d=2 00:07:05.570 14:08:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.570 14:08:03 -- scripts/common.sh@354 -- # echo 2 00:07:05.570 14:08:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:05.570 14:08:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:05.570 14:08:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:05.570 14:08:03 -- scripts/common.sh@367 -- # return 0 00:07:05.570 14:08:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.570 14:08:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:05.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.570 --rc genhtml_branch_coverage=1 00:07:05.570 --rc genhtml_function_coverage=1 00:07:05.570 --rc genhtml_legend=1 00:07:05.570 --rc geninfo_all_blocks=1 00:07:05.570 --rc geninfo_unexecuted_blocks=1 00:07:05.570 00:07:05.570 ' 00:07:05.570 14:08:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:05.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.570 --rc genhtml_branch_coverage=1 00:07:05.570 --rc genhtml_function_coverage=1 00:07:05.570 --rc genhtml_legend=1 00:07:05.570 --rc geninfo_all_blocks=1 00:07:05.570 --rc geninfo_unexecuted_blocks=1 00:07:05.570 00:07:05.570 ' 00:07:05.570 14:08:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:05.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.570 --rc genhtml_branch_coverage=1 00:07:05.570 --rc genhtml_function_coverage=1 00:07:05.570 --rc genhtml_legend=1 00:07:05.570 --rc geninfo_all_blocks=1 00:07:05.570 --rc geninfo_unexecuted_blocks=1 00:07:05.570 00:07:05.570 ' 00:07:05.570 14:08:04 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:05.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.570 --rc genhtml_branch_coverage=1 00:07:05.570 --rc genhtml_function_coverage=1 00:07:05.570 --rc genhtml_legend=1 00:07:05.570 --rc geninfo_all_blocks=1 00:07:05.570 --rc geninfo_unexecuted_blocks=1 00:07:05.570 00:07:05.570 ' 00:07:05.570 14:08:04 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:05.570 14:08:04 -- app/cmdline.sh@17 -- # spdk_tgt_pid=60060 00:07:05.570 14:08:04 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:05.570 14:08:04 -- app/cmdline.sh@18 -- # waitforlisten 60060 00:07:05.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.570 14:08:04 -- common/autotest_common.sh@829 -- # '[' -z 60060 ']' 00:07:05.570 14:08:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.570 14:08:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.570 14:08:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.570 14:08:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.570 14:08:04 -- common/autotest_common.sh@10 -- # set +x 00:07:05.570 [2024-11-19 14:08:04.068746] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:05.570 [2024-11-19 14:08:04.068856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60060 ] 00:07:05.832 [2024-11-19 14:08:04.218331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.832 [2024-11-19 14:08:04.389056] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:05.832 [2024-11-19 14:08:04.389261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.215 14:08:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.215 14:08:05 -- common/autotest_common.sh@862 -- # return 0 00:07:07.215 14:08:05 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:07.215 { 00:07:07.215 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:07.215 "fields": { 00:07:07.215 "major": 24, 00:07:07.215 "minor": 1, 00:07:07.215 "patch": 1, 00:07:07.215 "suffix": "-pre", 00:07:07.215 "commit": "c13c99a5e" 00:07:07.215 } 00:07:07.215 } 00:07:07.216 14:08:05 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:07.216 14:08:05 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:07.216 14:08:05 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:07.216 14:08:05 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:07.216 14:08:05 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:07.216 14:08:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.216 14:08:05 -- common/autotest_common.sh@10 -- # set +x 00:07:07.216 14:08:05 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:07.216 14:08:05 -- app/cmdline.sh@26 -- # sort 00:07:07.216 14:08:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.477 14:08:05 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:07.477 14:08:05 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:07.477 14:08:05 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:07.477 14:08:05 -- common/autotest_common.sh@650 -- # local es=0 00:07:07.477 14:08:05 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:07.477 14:08:05 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:07.477 14:08:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.477 14:08:05 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:07.477 14:08:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.477 14:08:05 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:07.477 14:08:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.477 14:08:05 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:07.477 14:08:05 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:07.477 14:08:05 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:07.477 request: 00:07:07.477 { 00:07:07.477 "method": "env_dpdk_get_mem_stats", 00:07:07.477 "req_id": 1 00:07:07.477 } 00:07:07.477 Got JSON-RPC error response 00:07:07.477 response: 00:07:07.477 { 00:07:07.477 "code": -32601, 00:07:07.477 "message": "Method not found" 00:07:07.477 } 00:07:07.477 14:08:05 -- common/autotest_common.sh@653 -- # es=1 00:07:07.477 14:08:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.477 14:08:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.477 14:08:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.477 14:08:05 -- app/cmdline.sh@1 -- # killprocess 60060 00:07:07.477 14:08:05 -- common/autotest_common.sh@936 -- # '[' -z 60060 ']' 00:07:07.477 14:08:05 -- common/autotest_common.sh@940 -- # kill -0 60060 00:07:07.477 14:08:05 -- common/autotest_common.sh@941 -- # uname 00:07:07.477 14:08:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:07.477 14:08:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60060 00:07:07.477 killing process with pid 60060 00:07:07.477 14:08:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:07.477 14:08:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:07.477 14:08:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60060' 00:07:07.477 14:08:06 -- common/autotest_common.sh@955 -- # kill 60060 00:07:07.477 14:08:06 -- common/autotest_common.sh@960 -- # wait 60060 00:07:08.864 00:07:08.864 real 0m3.350s 00:07:08.864 user 0m3.786s 00:07:08.864 sys 0m0.423s 00:07:08.864 14:08:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.864 ************************************ 00:07:08.864 END TEST app_cmdline 00:07:08.864 ************************************ 00:07:08.864 14:08:07 -- common/autotest_common.sh@10 -- # set +x 00:07:08.864 14:08:07 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:08.864 14:08:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:08.864 14:08:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.864 14:08:07 -- common/autotest_common.sh@10 -- # set +x 00:07:08.864 
************************************ 00:07:08.864 START TEST version 00:07:08.864 ************************************ 00:07:08.864 14:08:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:08.864 * Looking for test storage... 00:07:08.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:08.864 14:08:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:08.864 14:08:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:08.864 14:08:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:08.864 14:08:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:08.864 14:08:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:08.864 14:08:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:08.864 14:08:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:08.864 14:08:07 -- scripts/common.sh@335 -- # IFS=.-: 00:07:08.864 14:08:07 -- scripts/common.sh@335 -- # read -ra ver1 00:07:08.864 14:08:07 -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.864 14:08:07 -- scripts/common.sh@336 -- # read -ra ver2 00:07:08.864 14:08:07 -- scripts/common.sh@337 -- # local 'op=<' 00:07:08.864 14:08:07 -- scripts/common.sh@339 -- # ver1_l=2 00:07:08.864 14:08:07 -- scripts/common.sh@340 -- # ver2_l=1 00:07:08.864 14:08:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:08.864 14:08:07 -- scripts/common.sh@343 -- # case "$op" in 00:07:08.864 14:08:07 -- scripts/common.sh@344 -- # : 1 00:07:08.864 14:08:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:08.864 14:08:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:08.864 14:08:07 -- scripts/common.sh@364 -- # decimal 1 00:07:08.864 14:08:07 -- scripts/common.sh@352 -- # local d=1 00:07:08.864 14:08:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.864 14:08:07 -- scripts/common.sh@354 -- # echo 1 00:07:08.864 14:08:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:08.864 14:08:07 -- scripts/common.sh@365 -- # decimal 2 00:07:08.864 14:08:07 -- scripts/common.sh@352 -- # local d=2 00:07:08.864 14:08:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.864 14:08:07 -- scripts/common.sh@354 -- # echo 2 00:07:08.864 14:08:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:08.864 14:08:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:08.864 14:08:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:08.864 14:08:07 -- scripts/common.sh@367 -- # return 0 00:07:08.864 14:08:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.864 14:08:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:08.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.864 --rc genhtml_branch_coverage=1 00:07:08.865 --rc genhtml_function_coverage=1 00:07:08.865 --rc genhtml_legend=1 00:07:08.865 --rc geninfo_all_blocks=1 00:07:08.865 --rc geninfo_unexecuted_blocks=1 00:07:08.865 00:07:08.865 ' 00:07:08.865 14:08:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:08.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.865 --rc genhtml_branch_coverage=1 00:07:08.865 --rc genhtml_function_coverage=1 00:07:08.865 --rc genhtml_legend=1 00:07:08.865 --rc geninfo_all_blocks=1 00:07:08.865 --rc geninfo_unexecuted_blocks=1 00:07:08.865 00:07:08.865 ' 00:07:08.865 14:08:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:08.865 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:08.865 --rc genhtml_branch_coverage=1 00:07:08.865 --rc genhtml_function_coverage=1 00:07:08.865 --rc genhtml_legend=1 00:07:08.865 --rc geninfo_all_blocks=1 00:07:08.865 --rc geninfo_unexecuted_blocks=1 00:07:08.865 00:07:08.865 ' 00:07:08.865 14:08:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:08.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.865 --rc genhtml_branch_coverage=1 00:07:08.865 --rc genhtml_function_coverage=1 00:07:08.865 --rc genhtml_legend=1 00:07:08.865 --rc geninfo_all_blocks=1 00:07:08.865 --rc geninfo_unexecuted_blocks=1 00:07:08.865 00:07:08.865 ' 00:07:08.865 14:08:07 -- app/version.sh@17 -- # get_header_version major 00:07:08.865 14:08:07 -- app/version.sh@14 -- # cut -f2 00:07:08.865 14:08:07 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:08.865 14:08:07 -- app/version.sh@14 -- # tr -d '"' 00:07:08.865 14:08:07 -- app/version.sh@17 -- # major=24 00:07:08.865 14:08:07 -- app/version.sh@18 -- # get_header_version minor 00:07:08.865 14:08:07 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:08.865 14:08:07 -- app/version.sh@14 -- # cut -f2 00:07:08.865 14:08:07 -- app/version.sh@14 -- # tr -d '"' 00:07:08.865 14:08:07 -- app/version.sh@18 -- # minor=1 00:07:08.865 14:08:07 -- app/version.sh@19 -- # get_header_version patch 00:07:08.865 14:08:07 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:08.865 14:08:07 -- app/version.sh@14 -- # cut -f2 00:07:08.865 14:08:07 -- app/version.sh@14 -- # tr -d '"' 00:07:08.865 14:08:07 -- app/version.sh@19 -- # patch=1 00:07:08.865 14:08:07 -- app/version.sh@20 -- # get_header_version suffix 00:07:08.865 14:08:07 -- app/version.sh@14 -- # cut -f2 00:07:08.865 14:08:07 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:08.865 14:08:07 -- app/version.sh@14 -- # tr -d '"' 00:07:08.865 14:08:07 -- app/version.sh@20 -- # suffix=-pre 00:07:08.865 14:08:07 -- app/version.sh@22 -- # version=24.1 00:07:08.865 14:08:07 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:08.865 14:08:07 -- app/version.sh@25 -- # version=24.1.1 00:07:08.865 14:08:07 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:08.865 14:08:07 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:08.865 14:08:07 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:09.125 14:08:07 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:09.125 14:08:07 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:09.125 00:07:09.125 real 0m0.195s 00:07:09.125 user 0m0.123s 00:07:09.125 sys 0m0.099s 00:07:09.125 ************************************ 00:07:09.125 END TEST version 00:07:09.125 ************************************ 00:07:09.125 14:08:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.125 14:08:07 -- common/autotest_common.sh@10 -- # set +x 00:07:09.125 14:08:07 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:09.125 14:08:07 -- spdk/autotest.sh@191 -- # uname -s 00:07:09.125 14:08:07 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
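version.sh never asks a running target for its version: each get_header_version call above greps one #define out of include/spdk/version.h, cuts the tab-separated value field, and strips the quotes, then compares the result against the Python package's spdk.__version__. Condensed into a standalone sketch (the real script also rewrites the -pre suffix as rc0 so the two strings compare equal, as the 24.1.1rc0 lines above show):

    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)     # 24
    minor=$(get_header_version MINOR)     # 1
    patch=$(get_header_version PATCH)     # 1
    suffix=$(get_header_version SUFFIX)   # -pre
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    echo "$version$suffix"                # 24.1.1-pre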
00:07:09.125 14:08:07 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:09.125 14:08:07 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:09.125 14:08:07 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:07:09.125 14:08:07 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:09.125 14:08:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:09.125 14:08:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.125 14:08:07 -- common/autotest_common.sh@10 -- # set +x 00:07:09.125 ************************************ 00:07:09.125 START TEST blockdev_nvme 00:07:09.125 ************************************ 00:07:09.125 14:08:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:09.125 * Looking for test storage... 00:07:09.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:09.125 14:08:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:09.125 14:08:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:09.125 14:08:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:09.125 14:08:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:09.125 14:08:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:09.125 14:08:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:09.125 14:08:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:09.125 14:08:07 -- scripts/common.sh@335 -- # IFS=.-: 00:07:09.125 14:08:07 -- scripts/common.sh@335 -- # read -ra ver1 00:07:09.125 14:08:07 -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.125 14:08:07 -- scripts/common.sh@336 -- # read -ra ver2 00:07:09.125 14:08:07 -- scripts/common.sh@337 -- # local 'op=<' 00:07:09.125 14:08:07 -- scripts/common.sh@339 -- # ver1_l=2 00:07:09.125 14:08:07 -- scripts/common.sh@340 -- # ver2_l=1 00:07:09.125 14:08:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:09.125 14:08:07 -- scripts/common.sh@343 -- # case "$op" in 00:07:09.125 14:08:07 -- scripts/common.sh@344 -- # : 1 00:07:09.125 14:08:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:09.125 14:08:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.125 14:08:07 -- scripts/common.sh@364 -- # decimal 1 00:07:09.125 14:08:07 -- scripts/common.sh@352 -- # local d=1 00:07:09.125 14:08:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.125 14:08:07 -- scripts/common.sh@354 -- # echo 1 00:07:09.125 14:08:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:09.125 14:08:07 -- scripts/common.sh@365 -- # decimal 2 00:07:09.125 14:08:07 -- scripts/common.sh@352 -- # local d=2 00:07:09.125 14:08:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.126 14:08:07 -- scripts/common.sh@354 -- # echo 2 00:07:09.126 14:08:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:09.126 14:08:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:09.126 14:08:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:09.126 14:08:07 -- scripts/common.sh@367 -- # return 0 00:07:09.126 14:08:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.126 14:08:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:09.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.126 --rc genhtml_branch_coverage=1 00:07:09.126 --rc genhtml_function_coverage=1 00:07:09.126 --rc genhtml_legend=1 00:07:09.126 --rc geninfo_all_blocks=1 00:07:09.126 --rc geninfo_unexecuted_blocks=1 00:07:09.126 00:07:09.126 ' 00:07:09.126 14:08:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:09.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.126 --rc genhtml_branch_coverage=1 00:07:09.126 --rc genhtml_function_coverage=1 00:07:09.126 --rc genhtml_legend=1 00:07:09.126 --rc geninfo_all_blocks=1 00:07:09.126 --rc geninfo_unexecuted_blocks=1 00:07:09.126 00:07:09.126 ' 00:07:09.126 14:08:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:09.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.126 --rc genhtml_branch_coverage=1 00:07:09.126 --rc genhtml_function_coverage=1 00:07:09.126 --rc genhtml_legend=1 00:07:09.126 --rc geninfo_all_blocks=1 00:07:09.126 --rc geninfo_unexecuted_blocks=1 00:07:09.126 00:07:09.126 ' 00:07:09.126 14:08:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:09.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.126 --rc genhtml_branch_coverage=1 00:07:09.126 --rc genhtml_function_coverage=1 00:07:09.126 --rc genhtml_legend=1 00:07:09.126 --rc geninfo_all_blocks=1 00:07:09.126 --rc geninfo_unexecuted_blocks=1 00:07:09.126 00:07:09.126 ' 00:07:09.126 14:08:07 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:09.126 14:08:07 -- bdev/nbd_common.sh@6 -- # set -e 00:07:09.126 14:08:07 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:09.126 14:08:07 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:09.126 14:08:07 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:09.126 14:08:07 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:09.126 14:08:07 -- bdev/blockdev.sh@18 -- # : 00:07:09.126 14:08:07 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:07:09.126 14:08:07 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:07:09.126 14:08:07 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:07:09.126 14:08:07 -- bdev/blockdev.sh@672 -- # uname -s 00:07:09.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
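With the target up, blockdev.sh's setup_nvme_conf step (the gen_nvme.sh / load_subsystem_config exchange recorded just below) attaches every PCIe NVMe controller in the VM as a bdev. The generated JSON is a bdev-subsystem config with one bdev_nvme_attach_controller entry per controller; a sketch trimmed to the first two of the four controllers from this run (the full Nvme0..Nvme3 config at 0000:00:06.0 through 0000:00:09.0 appears verbatim below):

    json='{ "subsystem": "bdev", "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:07.0" } }
    ] }'
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j "$json"

Once loaded, bdev_get_bdevs (also below) reports one bdev per namespace, including the three namespaces Nvme2n1..Nvme2n3 behind the single controller at 0000:00:08.0.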
00:07:09.126 14:08:07 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:07:09.126 14:08:07 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:07:09.126 14:08:07 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:07:09.126 14:08:07 -- bdev/blockdev.sh@681 -- # crypto_device= 00:07:09.126 14:08:07 -- bdev/blockdev.sh@682 -- # dek= 00:07:09.126 14:08:07 -- bdev/blockdev.sh@683 -- # env_ctx= 00:07:09.126 14:08:07 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:07:09.126 14:08:07 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:07:09.126 14:08:07 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:07:09.126 14:08:07 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:07:09.126 14:08:07 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:07:09.126 14:08:07 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=60238 00:07:09.126 14:08:07 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:09.126 14:08:07 -- bdev/blockdev.sh@47 -- # waitforlisten 60238 00:07:09.126 14:08:07 -- common/autotest_common.sh@829 -- # '[' -z 60238 ']' 00:07:09.126 14:08:07 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:09.126 14:08:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.126 14:08:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.126 14:08:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.126 14:08:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.126 14:08:07 -- common/autotest_common.sh@10 -- # set +x 00:07:09.386 [2024-11-19 14:08:07.700803] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.386 [2024-11-19 14:08:07.701071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60238 ] 00:07:09.386 [2024-11-19 14:08:07.851004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.645 [2024-11-19 14:08:08.021182] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:09.645 [2024-11-19 14:08:08.021518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.069 14:08:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.070 14:08:09 -- common/autotest_common.sh@862 -- # return 0 00:07:11.070 14:08:09 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:07:11.070 14:08:09 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:07:11.070 14:08:09 -- bdev/blockdev.sh@79 -- # local json 00:07:11.070 14:08:09 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:07:11.070 14:08:09 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:11.070 14:08:09 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:07:11.070 14:08:09 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.070 14:08:09 -- common/autotest_common.sh@10 -- # set +x 00:07:11.070 14:08:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.070 14:08:09 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:07:11.070 14:08:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.070 14:08:09 -- common/autotest_common.sh@10 -- # set +x 00:07:11.070 14:08:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.070 14:08:09 -- bdev/blockdev.sh@738 -- # cat 00:07:11.070 14:08:09 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:07:11.070 14:08:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.070 14:08:09 -- common/autotest_common.sh@10 -- # set +x 00:07:11.070 14:08:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.070 14:08:09 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:07:11.070 14:08:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.070 14:08:09 -- common/autotest_common.sh@10 -- # set +x 00:07:11.070 14:08:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.070 14:08:09 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:11.070 14:08:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.070 14:08:09 -- common/autotest_common.sh@10 -- # set +x 00:07:11.070 14:08:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.070 14:08:09 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:07:11.070 14:08:09 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:07:11.070 14:08:09 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:07:11.070 14:08:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.070 14:08:09 -- common/autotest_common.sh@10 -- # set +x 00:07:11.070 14:08:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.070 14:08:09 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:07:11.070 14:08:09 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "edb1c6a2-15b5-4ac7-8aec-76459dc2e719"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "edb1c6a2-15b5-4ac7-8aec-76459dc2e719",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "fffbead9-9c78-4c91-a0ca-8a875db43736"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' 
"num_blocks": 1310720,' ' "uuid": "fffbead9-9c78-4c91-a0ca-8a875db43736",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "9c7d790b-1fba-4f3c-8e41-930cd81d395d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9c7d790b-1fba-4f3c-8e41-930cd81d395d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "e1d7ac0f-0a54-4ee3-8355-ee67f715482b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e1d7ac0f-0a54-4ee3-8355-ee67f715482b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' 
"multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "5168ea9c-7887-4932-8183-52c78809e268"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5168ea9c-7887-4932-8183-52c78809e268",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "e332def8-1e95-4080-81b0-b7ffc2999887"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e332def8-1e95-4080-81b0-b7ffc2999887",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:11.070 14:08:09 -- bdev/blockdev.sh@747 -- # jq -r .name 00:07:11.331 14:08:09 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:07:11.331 14:08:09 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:07:11.331 14:08:09 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:07:11.331 14:08:09 -- bdev/blockdev.sh@752 -- # killprocess 60238 00:07:11.331 14:08:09 -- common/autotest_common.sh@936 -- # '[' -z 60238 ']' 00:07:11.331 14:08:09 -- common/autotest_common.sh@940 -- # kill -0 60238 00:07:11.331 14:08:09 -- common/autotest_common.sh@941 -- # uname 00:07:11.331 14:08:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:11.331 14:08:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 60238 00:07:11.331 killing process with pid 60238 00:07:11.331 14:08:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:11.331 14:08:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:11.331 14:08:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60238' 00:07:11.331 14:08:09 -- common/autotest_common.sh@955 -- # kill 60238 00:07:11.331 14:08:09 -- common/autotest_common.sh@960 -- # wait 60238 00:07:12.717 14:08:10 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:12.717 14:08:10 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:12.718 14:08:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:12.718 14:08:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.718 14:08:10 -- common/autotest_common.sh@10 -- # set +x 00:07:12.718 ************************************ 00:07:12.718 START TEST bdev_hello_world 00:07:12.718 ************************************ 00:07:12.718 14:08:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:12.718 [2024-11-19 14:08:10.969151] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:12.718 [2024-11-19 14:08:10.969325] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60324 ] 00:07:12.718 [2024-11-19 14:08:11.104040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.718 [2024-11-19 14:08:11.245131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.289 [2024-11-19 14:08:11.706865] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:13.289 [2024-11-19 14:08:11.707036] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:13.289 [2024-11-19 14:08:11.707069] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:13.289 [2024-11-19 14:08:11.709127] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:13.289 [2024-11-19 14:08:11.709477] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:13.289 [2024-11-19 14:08:11.709544] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:13.289 [2024-11-19 14:08:11.709672] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
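The run above shows the pattern this suite repeats for every test: spdk_tgt is started, the NVMe bdev JSON emitted by scripts/gen_nvme.sh is loaded over the RPC socket, the unclaimed bdevs are enumerated, and the target is killed before the standalone example binary takes the PCIe devices over for itself. A minimal by-hand sketch of that sequence follows; it assumes the job's /home/vagrant/spdk_repo layout, the socket poll is a simplification of the harness's waitforlisten helper, and rpc_cmd in the trace is treated as a thin wrapper over scripts/rpc.py:

    # Start the target and wait for its default RPC socket
    # (simplified stand-in for waitforlisten).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    tgt_pid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

    # Attach the QEMU NVMe controllers; gen_nvme.sh emits the
    # bdev_nvme_attach_controller JSON seen in the trace above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config \
        -j "$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)"

    # List unclaimed bdevs the way blockdev.sh builds bdevs_name.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false) | .name'

    # Stop the target first: hello_bdev opens the devices itself.
    kill "$tgt_pid"; wait "$tgt_pid"
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1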
00:07:13.289 00:07:13.289 [2024-11-19 14:08:11.709734] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:13.861 00:07:13.861 ************************************ 00:07:13.861 END TEST bdev_hello_world 00:07:13.861 ************************************ 00:07:13.861 real 0m1.411s 00:07:13.861 user 0m1.145s 00:07:13.861 sys 0m0.161s 00:07:13.861 14:08:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.861 14:08:12 -- common/autotest_common.sh@10 -- # set +x 00:07:13.861 14:08:12 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:07:13.861 14:08:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:13.861 14:08:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.861 14:08:12 -- common/autotest_common.sh@10 -- # set +x 00:07:13.861 ************************************ 00:07:13.861 START TEST bdev_bounds 00:07:13.861 ************************************ 00:07:13.861 14:08:12 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:07:13.861 14:08:12 -- bdev/blockdev.sh@288 -- # bdevio_pid=60366 00:07:13.861 14:08:12 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:13.861 Process bdevio pid: 60366 00:07:13.861 14:08:12 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 60366' 00:07:13.861 14:08:12 -- bdev/blockdev.sh@291 -- # waitforlisten 60366 00:07:13.861 14:08:12 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:13.861 14:08:12 -- common/autotest_common.sh@829 -- # '[' -z 60366 ']' 00:07:13.861 14:08:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.861 14:08:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.861 14:08:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.861 14:08:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.861 14:08:12 -- common/autotest_common.sh@10 -- # set +x 00:07:14.122 [2024-11-19 14:08:12.428336] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:14.122 [2024-11-19 14:08:12.428442] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60366 ] 00:07:14.122 [2024-11-19 14:08:12.574586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.384 [2024-11-19 14:08:12.714724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.384 [2024-11-19 14:08:12.715005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.384 [2024-11-19 14:08:12.715030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.956 14:08:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.956 14:08:13 -- common/autotest_common.sh@862 -- # return 0 00:07:14.956 14:08:13 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:14.956 I/O targets: 00:07:14.956 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:14.956 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:14.956 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:14.956 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:14.956 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:14.956 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:14.956 00:07:14.956 00:07:14.956 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.956 http://cunit.sourceforge.net/ 00:07:14.956 00:07:14.956 00:07:14.956 Suite: bdevio tests on: Nvme3n1 00:07:14.956 Test: blockdev write read block ...passed 00:07:14.956 Test: blockdev write zeroes read block ...passed 00:07:14.956 Test: blockdev write zeroes read no split ...passed 00:07:14.956 Test: blockdev write zeroes read split ...passed 00:07:14.956 Test: blockdev write zeroes read split partial ...passed 00:07:14.956 Test: blockdev reset ...[2024-11-19 14:08:13.388958] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:07:14.956 [2024-11-19 14:08:13.391367] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:14.956 passed 00:07:14.956 Test: blockdev write read 8 blocks ...passed 00:07:14.956 Test: blockdev write read size > 128k ...passed 00:07:14.956 Test: blockdev write read invalid size ...passed 00:07:14.956 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.956 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.956 Test: blockdev write read max offset ...passed 00:07:14.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.956 Test: blockdev writev readv 8 blocks ...passed 00:07:14.956 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.956 Test: blockdev writev readv block ...passed 00:07:14.956 Test: blockdev writev readv size > 128k ...passed 00:07:14.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.956 Test: blockdev comparev and writev ...[2024-11-19 14:08:13.398223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27240e000 len:0x1000 00:07:14.956 [2024-11-19 14:08:13.398271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:14.956 passed 00:07:14.956 Test: blockdev nvme passthru rw ...passed 00:07:14.957 Test: blockdev nvme passthru vendor specific ...passed 00:07:14.957 Test: blockdev nvme admin passthru ...[2024-11-19 14:08:13.398953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:14.957 [2024-11-19 14:08:13.398980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:14.957 passed 00:07:14.957 Test: blockdev copy ...passed 00:07:14.957 Suite: bdevio tests on: Nvme2n3 00:07:14.957 Test: blockdev write read block ...passed 00:07:14.957 Test: blockdev write zeroes read block ...passed 00:07:14.957 Test: blockdev write zeroes read no split ...passed 00:07:14.957 Test: blockdev write zeroes read split ...passed 00:07:14.957 Test: blockdev write zeroes read split partial ...passed 00:07:14.957 Test: blockdev reset ...[2024-11-19 14:08:13.442138] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:14.957 [2024-11-19 14:08:13.444659] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:14.957 passed 00:07:14.957 Test: blockdev write read 8 blocks ...passed 00:07:14.957 Test: blockdev write read size > 128k ...passed 00:07:14.957 Test: blockdev write read invalid size ...passed 00:07:14.957 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.957 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.957 Test: blockdev write read max offset ...passed 00:07:14.957 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.957 Test: blockdev writev readv 8 blocks ...passed 00:07:14.957 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.957 Test: blockdev writev readv block ...passed 00:07:14.957 Test: blockdev writev readv size > 128k ...passed 00:07:14.957 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.957 Test: blockdev comparev and writev ...[2024-11-19 14:08:13.451296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27240a000 len:0x1000 00:07:14.957 [2024-11-19 14:08:13.451334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:14.957 passed 00:07:14.957 Test: blockdev nvme passthru rw ...passed 00:07:14.957 Test: blockdev nvme passthru vendor specific ...passed 00:07:14.957 Test: blockdev nvme admin passthru ...[2024-11-19 14:08:13.452014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:14.957 [2024-11-19 14:08:13.452037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:14.957 passed 00:07:14.957 Test: blockdev copy ...passed 00:07:14.957 Suite: bdevio tests on: Nvme2n2 00:07:14.957 Test: blockdev write read block ...passed 00:07:14.957 Test: blockdev write zeroes read block ...passed 00:07:14.957 Test: blockdev write zeroes read no split ...passed 00:07:14.957 Test: blockdev write zeroes read split ...passed 00:07:14.957 Test: blockdev write zeroes read split partial ...passed 00:07:14.957 Test: blockdev reset ...[2024-11-19 14:08:13.494466] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:14.957 [2024-11-19 14:08:13.496986] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:14.957 passed 00:07:14.957 Test: blockdev write read 8 blocks ...passed 00:07:14.957 Test: blockdev write read size > 128k ...passed 00:07:14.957 Test: blockdev write read invalid size ...passed 00:07:14.957 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.957 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.957 Test: blockdev write read max offset ...passed 00:07:14.957 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.957 Test: blockdev writev readv 8 blocks ...passed 00:07:14.957 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.957 Test: blockdev writev readv block ...passed 00:07:14.957 Test: blockdev writev readv size > 128k ...passed 00:07:14.957 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.957 Test: blockdev comparev and writev ...[2024-11-19 14:08:13.503387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x271c06000 len:0x1000 00:07:14.957 passed 00:07:14.957 Test: blockdev nvme passthru rw ...[2024-11-19 14:08:13.503422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:14.957 passed 00:07:14.957 Test: blockdev nvme passthru vendor specific ...passed 00:07:14.957 Test: blockdev nvme admin passthru ...[2024-11-19 14:08:13.503957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:14.957 [2024-11-19 14:08:13.503983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:14.957 passed 00:07:14.957 Test: blockdev copy ...passed 00:07:14.957 Suite: bdevio tests on: Nvme2n1 00:07:14.957 Test: blockdev write read block ...passed 00:07:14.957 Test: blockdev write zeroes read block ...passed 00:07:14.957 Test: blockdev write zeroes read no split ...passed 00:07:15.218 Test: blockdev write zeroes read split ...passed 00:07:15.218 Test: blockdev write zeroes read split partial ...passed 00:07:15.218 Test: blockdev reset ...[2024-11-19 14:08:13.548834] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:15.218 [2024-11-19 14:08:13.551378] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:15.218 passed 00:07:15.218 Test: blockdev write read 8 blocks ...passed 00:07:15.218 Test: blockdev write read size > 128k ...passed 00:07:15.218 Test: blockdev write read invalid size ...passed 00:07:15.218 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:15.218 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:15.218 Test: blockdev write read max offset ...passed 00:07:15.218 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:15.218 Test: blockdev writev readv 8 blocks ...passed 00:07:15.218 Test: blockdev writev readv 30 x 1block ...passed 00:07:15.218 Test: blockdev writev readv block ...passed 00:07:15.218 Test: blockdev writev readv size > 128k ...passed 00:07:15.218 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:15.218 Test: blockdev comparev and writev ...[2024-11-19 14:08:13.557816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x271c01000 len:0x1000 00:07:15.218 [2024-11-19 14:08:13.557853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:15.218 passed 00:07:15.218 Test: blockdev nvme passthru rw ...passed 00:07:15.218 Test: blockdev nvme passthru vendor specific ...passed 00:07:15.218 Test: blockdev nvme admin passthru ...[2024-11-19 14:08:13.558391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:15.218 [2024-11-19 14:08:13.558416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:15.218 passed 00:07:15.219 Test: blockdev copy ...passed 00:07:15.219 Suite: bdevio tests on: Nvme1n1 00:07:15.219 Test: blockdev write read block ...passed 00:07:15.219 Test: blockdev write zeroes read block ...passed 00:07:15.219 Test: blockdev write zeroes read no split ...passed 00:07:15.219 Test: blockdev write zeroes read split ...passed 00:07:15.219 Test: blockdev write zeroes read split partial ...passed 00:07:15.219 Test: blockdev reset ...[2024-11-19 14:08:13.600749] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:07:15.219 [2024-11-19 14:08:13.603062] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:15.219 passed 00:07:15.219 Test: blockdev write read 8 blocks ...passed 00:07:15.219 Test: blockdev write read size > 128k ...passed 00:07:15.219 Test: blockdev write read invalid size ...passed 00:07:15.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:15.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:15.219 Test: blockdev write read max offset ...passed 00:07:15.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:15.219 Test: blockdev writev readv 8 blocks ...passed 00:07:15.219 Test: blockdev writev readv 30 x 1block ...passed 00:07:15.219 Test: blockdev writev readv block ...passed 00:07:15.219 Test: blockdev writev readv size > 128k ...passed 00:07:15.219 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:15.219 Test: blockdev comparev and writev ...[2024-11-19 14:08:13.608812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26ce06000 len:0x1000 00:07:15.219 [2024-11-19 14:08:13.608847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:15.219 passed 00:07:15.219 Test: blockdev nvme passthru rw ...passed 00:07:15.219 Test: blockdev nvme passthru vendor specific ...passed 00:07:15.219 Test: blockdev nvme admin passthru ...[2024-11-19 14:08:13.609484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:15.219 [2024-11-19 14:08:13.609506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:15.219 passed 00:07:15.219 Test: blockdev copy ...passed 00:07:15.219 Suite: bdevio tests on: Nvme0n1 00:07:15.219 Test: blockdev write read block ...passed 00:07:15.219 Test: blockdev write zeroes read block ...passed 00:07:15.219 Test: blockdev write zeroes read no split ...passed 00:07:15.219 Test: blockdev write zeroes read split ...passed 00:07:15.219 Test: blockdev write zeroes read split partial ...passed 00:07:15.219 Test: blockdev reset ...[2024-11-19 14:08:13.653254] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:07:15.219 [2024-11-19 14:08:13.655534] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:15.219 passed 00:07:15.219 Test: blockdev write read 8 blocks ...passed 00:07:15.219 Test: blockdev write read size > 128k ...passed 00:07:15.219 Test: blockdev write read invalid size ...passed 00:07:15.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:15.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:15.219 Test: blockdev write read max offset ...passed 00:07:15.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:15.219 Test: blockdev writev readv 8 blocks ...passed 00:07:15.219 Test: blockdev writev readv 30 x 1block ...passed 00:07:15.219 Test: blockdev writev readv block ...passed 00:07:15.219 Test: blockdev writev readv size > 128k ...passed 00:07:15.219 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:15.219 Test: blockdev comparev and writev ...[2024-11-19 14:08:13.661415] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:15.219 separate metadata which is not supported yet. 
00:07:15.219 passed 00:07:15.219 Test: blockdev nvme passthru rw ...passed 00:07:15.219 Test: blockdev nvme passthru vendor specific ...passed 00:07:15.219 Test: blockdev nvme admin passthru ...[2024-11-19 14:08:13.661776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:15.219 [2024-11-19 14:08:13.661807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:15.219 passed 00:07:15.219 Test: blockdev copy ...passed 00:07:15.219 00:07:15.219 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.219 suites 6 6 n/a 0 0 00:07:15.219 tests 138 138 138 0 0 00:07:15.219 asserts 893 893 893 0 n/a 00:07:15.219 00:07:15.219 Elapsed time = 0.895 seconds 00:07:15.219 0 00:07:15.219 14:08:13 -- bdev/blockdev.sh@293 -- # killprocess 60366 00:07:15.219 14:08:13 -- common/autotest_common.sh@936 -- # '[' -z 60366 ']' 00:07:15.219 14:08:13 -- common/autotest_common.sh@940 -- # kill -0 60366 00:07:15.219 14:08:13 -- common/autotest_common.sh@941 -- # uname 00:07:15.219 14:08:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:15.219 14:08:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60366 00:07:15.219 14:08:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:15.219 14:08:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:15.219 killing process with pid 60366 00:07:15.219 14:08:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60366' 00:07:15.219 14:08:13 -- common/autotest_common.sh@955 -- # kill 60366 00:07:15.219 14:08:13 -- common/autotest_common.sh@960 -- # wait 60366 00:07:15.792 14:08:14 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:07:15.792 00:07:15.792 real 0m1.888s 00:07:15.792 user 0m4.698s 00:07:15.792 sys 0m0.249s 00:07:15.792 ************************************ 00:07:15.792 END TEST bdev_bounds 00:07:15.792 ************************************ 00:07:15.792 14:08:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.792 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:07:15.792 14:08:14 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:15.792 14:08:14 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:07:15.792 14:08:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.792 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:07:15.792 ************************************ 00:07:15.792 START TEST bdev_nbd 00:07:15.792 ************************************ 00:07:15.792 14:08:14 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:15.792 14:08:14 -- bdev/blockdev.sh@298 -- # uname -s 00:07:15.792 14:08:14 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:07:15.792 14:08:14 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.792 14:08:14 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:15.792 14:08:14 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:15.792 14:08:14 -- bdev/blockdev.sh@302 -- # local bdev_all 00:07:15.792 14:08:14 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:07:15.792 14:08:14 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd 
]] 00:07:15.792 14:08:14 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:15.792 14:08:14 -- bdev/blockdev.sh@309 -- # local nbd_all 00:07:15.792 14:08:14 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:07:15.792 14:08:14 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:15.792 14:08:14 -- bdev/blockdev.sh@312 -- # local nbd_list 00:07:15.792 14:08:14 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:15.792 14:08:14 -- bdev/blockdev.sh@313 -- # local bdev_list 00:07:15.792 14:08:14 -- bdev/blockdev.sh@316 -- # nbd_pid=60415 00:07:15.792 14:08:14 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:15.792 14:08:14 -- bdev/blockdev.sh@318 -- # waitforlisten 60415 /var/tmp/spdk-nbd.sock 00:07:15.792 14:08:14 -- common/autotest_common.sh@829 -- # '[' -z 60415 ']' 00:07:15.792 14:08:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:15.792 14:08:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:15.792 14:08:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:15.792 14:08:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.792 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:07:15.792 14:08:14 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:16.053 [2024-11-19 14:08:14.353156] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:16.053 [2024-11-19 14:08:14.353234] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.053 [2024-11-19 14:08:14.494867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.315 [2024-11-19 14:08:14.648070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.887 14:08:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.887 14:08:15 -- common/autotest_common.sh@862 -- # return 0 00:07:16.887 14:08:15 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:16.887 14:08:15 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.887 14:08:15 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:16.887 14:08:15 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:16.887 14:08:15 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:16.888 14:08:15 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.888 14:08:15 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:16.888 14:08:15 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:16.888 14:08:15 -- bdev/nbd_common.sh@24 -- # local i 00:07:16.888 14:08:15 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:16.888 14:08:15 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:16.888 14:08:15 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:16.888 14:08:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:16.888 14:08:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:16.888 14:08:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:16.888 14:08:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:16.888 14:08:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:16.888 14:08:15 -- common/autotest_common.sh@867 -- # local i 00:07:16.888 14:08:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:16.888 14:08:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:16.888 14:08:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:16.888 14:08:15 -- common/autotest_common.sh@871 -- # break 00:07:16.888 14:08:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:16.888 14:08:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:16.888 14:08:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.888 1+0 records in 00:07:16.888 1+0 records out 00:07:16.888 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248566 s, 16.5 MB/s 00:07:16.888 14:08:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.888 14:08:15 -- common/autotest_common.sh@884 -- # size=4096 00:07:16.888 14:08:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.888 14:08:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:16.888 14:08:15 -- common/autotest_common.sh@887 -- # return 0 00:07:16.888 14:08:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.888 14:08:15 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:16.888 14:08:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:17.148 14:08:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:17.148 14:08:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:17.148 14:08:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:17.148 14:08:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:17.148 14:08:15 -- common/autotest_common.sh@867 -- # local i 00:07:17.148 14:08:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:17.148 14:08:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:17.148 14:08:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:17.148 14:08:15 -- common/autotest_common.sh@871 -- # break 00:07:17.148 14:08:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:17.148 14:08:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:17.148 14:08:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.148 1+0 records in 00:07:17.148 1+0 records out 00:07:17.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473744 s, 8.6 MB/s 00:07:17.148 14:08:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.148 14:08:15 -- common/autotest_common.sh@884 -- # size=4096 00:07:17.148 14:08:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.148 14:08:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:17.148 14:08:15 -- common/autotest_common.sh@887 -- # return 0 00:07:17.148 14:08:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:17.148 14:08:15 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:17.148 14:08:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:17.409 14:08:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:17.409 14:08:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:17.409 14:08:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:17.409 14:08:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:17.409 14:08:15 -- common/autotest_common.sh@867 -- # local i 00:07:17.409 14:08:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:17.409 14:08:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:17.409 14:08:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:17.409 14:08:15 -- common/autotest_common.sh@871 -- # break 00:07:17.409 14:08:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:17.409 14:08:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:17.409 14:08:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.410 1+0 records in 00:07:17.410 1+0 records out 00:07:17.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523246 s, 7.8 MB/s 00:07:17.410 14:08:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.410 14:08:15 -- common/autotest_common.sh@884 -- # size=4096 00:07:17.410 14:08:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.410 14:08:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:17.410 14:08:15 -- common/autotest_common.sh@887 -- # return 0 
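Each nbd_start_disk call traced above funnels through the same waitfornbd helper: poll /proc/partitions until the kernel registers the named device, then issue one 4 KiB O_DIRECT read and check that something actually landed. A condensed sketch of that pattern, assuming the job's nbdtest scratch path; the bounded loops mirror the (( i <= 20 )) guards visible in the trace, while the sleep interval is an assumption, since xtrace only shows the iterations that ran:

    waitfornbd() {
        local nbd_name=$1 i
        # Wait for the kernel to publish the device node.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One 4 KiB read; iflag=direct bypasses the page cache so the
        # request must be served by the nbd connection itself.
        dd if="/dev/$nbd_name" \
            of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct || return 1
        # A zero-length result means the read silently failed.
        [ "$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)" != 0 ]
        local rc=$?
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        return $rc
    }

The "1+0 records in / 1+0 records out" pairs and the per-device throughput figures in the dd output above are exactly this single-block probe.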
00:07:17.410 14:08:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:17.410 14:08:15 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:17.410 14:08:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:17.671 14:08:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:17.671 14:08:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:17.671 14:08:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:17.671 14:08:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:17.671 14:08:16 -- common/autotest_common.sh@867 -- # local i 00:07:17.671 14:08:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:17.671 14:08:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:17.671 14:08:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:17.671 14:08:16 -- common/autotest_common.sh@871 -- # break 00:07:17.671 14:08:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:17.671 14:08:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:17.671 14:08:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.671 1+0 records in 00:07:17.671 1+0 records out 00:07:17.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012711 s, 3.2 MB/s 00:07:17.671 14:08:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.671 14:08:16 -- common/autotest_common.sh@884 -- # size=4096 00:07:17.671 14:08:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.671 14:08:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:17.671 14:08:16 -- common/autotest_common.sh@887 -- # return 0 00:07:17.671 14:08:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:17.671 14:08:16 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:17.671 14:08:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:17.932 14:08:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:17.932 14:08:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:17.932 14:08:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:17.932 14:08:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:17.932 14:08:16 -- common/autotest_common.sh@867 -- # local i 00:07:17.932 14:08:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:17.932 14:08:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:17.932 14:08:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:17.932 14:08:16 -- common/autotest_common.sh@871 -- # break 00:07:17.932 14:08:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:17.932 14:08:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:17.932 14:08:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.932 1+0 records in 00:07:17.932 1+0 records out 00:07:17.932 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000788721 s, 5.2 MB/s 00:07:17.932 14:08:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.932 14:08:16 -- common/autotest_common.sh@884 -- # size=4096 00:07:17.932 14:08:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.932 14:08:16 -- common/autotest_common.sh@886 -- # '[' 4096 
'!=' 0 ']' 00:07:17.932 14:08:16 -- common/autotest_common.sh@887 -- # return 0 00:07:17.932 14:08:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:17.932 14:08:16 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:17.932 14:08:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:18.194 14:08:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:18.194 14:08:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:18.194 14:08:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:18.194 14:08:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:18.194 14:08:16 -- common/autotest_common.sh@867 -- # local i 00:07:18.194 14:08:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:18.194 14:08:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:18.194 14:08:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:18.194 14:08:16 -- common/autotest_common.sh@871 -- # break 00:07:18.194 14:08:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:18.194 14:08:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:18.194 14:08:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.194 1+0 records in 00:07:18.194 1+0 records out 00:07:18.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113972 s, 3.6 MB/s 00:07:18.194 14:08:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.194 14:08:16 -- common/autotest_common.sh@884 -- # size=4096 00:07:18.194 14:08:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.194 14:08:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:18.194 14:08:16 -- common/autotest_common.sh@887 -- # return 0 00:07:18.194 14:08:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:18.194 14:08:16 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:18.194 14:08:16 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.455 14:08:16 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:18.455 { 00:07:18.455 "nbd_device": "/dev/nbd0", 00:07:18.455 "bdev_name": "Nvme0n1" 00:07:18.455 }, 00:07:18.455 { 00:07:18.455 "nbd_device": "/dev/nbd1", 00:07:18.455 "bdev_name": "Nvme1n1" 00:07:18.455 }, 00:07:18.455 { 00:07:18.455 "nbd_device": "/dev/nbd2", 00:07:18.455 "bdev_name": "Nvme2n1" 00:07:18.455 }, 00:07:18.455 { 00:07:18.455 "nbd_device": "/dev/nbd3", 00:07:18.455 "bdev_name": "Nvme2n2" 00:07:18.455 }, 00:07:18.455 { 00:07:18.455 "nbd_device": "/dev/nbd4", 00:07:18.455 "bdev_name": "Nvme2n3" 00:07:18.455 }, 00:07:18.455 { 00:07:18.455 "nbd_device": "/dev/nbd5", 00:07:18.455 "bdev_name": "Nvme3n1" 00:07:18.455 } 00:07:18.455 ]' 00:07:18.455 14:08:16 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:18.455 14:08:16 -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:18.455 { 00:07:18.455 "nbd_device": "/dev/nbd0", 00:07:18.455 "bdev_name": "Nvme0n1" 00:07:18.455 }, 00:07:18.455 { 00:07:18.455 "nbd_device": "/dev/nbd1", 00:07:18.455 "bdev_name": "Nvme1n1" 00:07:18.455 }, 00:07:18.455 { 00:07:18.455 "nbd_device": "/dev/nbd2", 00:07:18.455 "bdev_name": "Nvme2n1" 00:07:18.455 }, 00:07:18.455 { 00:07:18.455 "nbd_device": "/dev/nbd3", 00:07:18.455 "bdev_name": "Nvme2n2" 00:07:18.455 }, 00:07:18.455 { 00:07:18.455 "nbd_device": 
"/dev/nbd4", 00:07:18.455 "bdev_name": "Nvme2n3" 00:07:18.455 }, 00:07:18.455 { 00:07:18.455 "nbd_device": "/dev/nbd5", 00:07:18.455 "bdev_name": "Nvme3n1" 00:07:18.455 } 00:07:18.455 ]' 00:07:18.455 14:08:16 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:18.455 14:08:16 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:18.455 14:08:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.455 14:08:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:18.455 14:08:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.455 14:08:16 -- bdev/nbd_common.sh@51 -- # local i 00:07:18.455 14:08:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.455 14:08:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.455 14:08:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.455 14:08:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.455 14:08:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.455 14:08:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.455 14:08:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.455 14:08:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.455 14:08:17 -- bdev/nbd_common.sh@41 -- # break 00:07:18.456 14:08:17 -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.456 14:08:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.456 14:08:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.717 14:08:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.717 14:08:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.717 14:08:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.717 14:08:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.717 14:08:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.717 14:08:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.717 14:08:17 -- bdev/nbd_common.sh@41 -- # break 00:07:18.717 14:08:17 -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.717 14:08:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.717 14:08:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:18.977 14:08:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:18.977 14:08:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:18.977 14:08:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:18.977 14:08:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.977 14:08:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.977 14:08:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:18.977 14:08:17 -- bdev/nbd_common.sh@41 -- # break 00:07:18.977 14:08:17 -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.977 14:08:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.977 14:08:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:19.239 14:08:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:19.239 14:08:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:19.239 14:08:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:19.239 
14:08:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.239 14:08:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.239 14:08:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:19.239 14:08:17 -- bdev/nbd_common.sh@41 -- # break 00:07:19.239 14:08:17 -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.239 14:08:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.239 14:08:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:19.498 14:08:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:19.498 14:08:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:19.498 14:08:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:19.498 14:08:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.498 14:08:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.498 14:08:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:19.498 14:08:17 -- bdev/nbd_common.sh@41 -- # break 00:07:19.498 14:08:17 -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.498 14:08:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.498 14:08:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:19.498 14:08:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:19.498 14:08:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:19.498 14:08:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:19.498 14:08:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.498 14:08:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.498 14:08:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:19.498 14:08:18 -- bdev/nbd_common.sh@41 -- # break 00:07:19.498 14:08:18 -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.498 14:08:18 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.498 14:08:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.498 14:08:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@65 -- # true 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@65 -- # count=0 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@122 -- # count=0 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@127 -- # return 0 00:07:19.756 14:08:18 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@12 -- # local i 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:19.756 14:08:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:20.015 /dev/nbd0 00:07:20.015 14:08:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:20.015 14:08:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:20.015 14:08:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:20.015 14:08:18 -- common/autotest_common.sh@867 -- # local i 00:07:20.015 14:08:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:20.015 14:08:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:20.015 14:08:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:20.015 14:08:18 -- common/autotest_common.sh@871 -- # break 00:07:20.015 14:08:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:20.015 14:08:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:20.015 14:08:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.015 1+0 records in 00:07:20.015 1+0 records out 00:07:20.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550569 s, 7.4 MB/s 00:07:20.015 14:08:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.015 14:08:18 -- common/autotest_common.sh@884 -- # size=4096 00:07:20.015 14:08:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.015 14:08:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:20.015 14:08:18 -- common/autotest_common.sh@887 -- # return 0 00:07:20.015 14:08:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.015 14:08:18 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:20.015 14:08:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:20.272 /dev/nbd1 00:07:20.272 14:08:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:20.272 14:08:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:20.273 14:08:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:20.273 14:08:18 -- common/autotest_common.sh@867 -- # local i 00:07:20.273 14:08:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:20.273 14:08:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:20.273 14:08:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:20.273 14:08:18 -- common/autotest_common.sh@871 -- # break 
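Both verification phases drive the same three RPCs against the bdev_svc app listening on /var/tmp/spdk-nbd.sock: nbd_start_disk exports a bdev as a kernel block device, nbd_get_disks reads back the live bdev-to-device pairs as JSON, and nbd_stop_disk tears the export down. A by-hand sketch, assuming bdev_svc is already up with -r /var/tmp/spdk-nbd.sock as at the top of this test:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Export a bdev on an explicit device node.
    $rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0

    # Dump the live mappings; the harness extracts the device
    # paths from this JSON with jq.
    $rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device'

    # Detach the export again.
    $rpc -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0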
00:07:20.273 14:08:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:20.273 14:08:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:20.273 14:08:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.273 1+0 records in 00:07:20.273 1+0 records out 00:07:20.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466381 s, 8.8 MB/s 00:07:20.273 14:08:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.273 14:08:18 -- common/autotest_common.sh@884 -- # size=4096 00:07:20.273 14:08:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.273 14:08:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:20.273 14:08:18 -- common/autotest_common.sh@887 -- # return 0 00:07:20.273 14:08:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.273 14:08:18 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:20.273 14:08:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:20.531 /dev/nbd10 00:07:20.531 14:08:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:20.531 14:08:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:20.531 14:08:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:20.531 14:08:18 -- common/autotest_common.sh@867 -- # local i 00:07:20.531 14:08:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:20.531 14:08:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:20.531 14:08:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:20.531 14:08:18 -- common/autotest_common.sh@871 -- # break 00:07:20.531 14:08:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:20.531 14:08:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:20.531 14:08:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.531 1+0 records in 00:07:20.531 1+0 records out 00:07:20.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641765 s, 6.4 MB/s 00:07:20.531 14:08:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.531 14:08:18 -- common/autotest_common.sh@884 -- # size=4096 00:07:20.531 14:08:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.531 14:08:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:20.531 14:08:18 -- common/autotest_common.sh@887 -- # return 0 00:07:20.531 14:08:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.531 14:08:18 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:20.531 14:08:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:20.531 /dev/nbd11 00:07:20.789 14:08:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:20.789 14:08:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:20.789 14:08:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:20.789 14:08:19 -- common/autotest_common.sh@867 -- # local i 00:07:20.789 14:08:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:20.789 14:08:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:20.789 14:08:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:20.789 14:08:19 -- 
common/autotest_common.sh@871 -- # break 00:07:20.789 14:08:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:20.789 14:08:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:20.789 14:08:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.789 1+0 records in 00:07:20.789 1+0 records out 00:07:20.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385613 s, 10.6 MB/s 00:07:20.789 14:08:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.789 14:08:19 -- common/autotest_common.sh@884 -- # size=4096 00:07:20.789 14:08:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.789 14:08:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:20.789 14:08:19 -- common/autotest_common.sh@887 -- # return 0 00:07:20.789 14:08:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.789 14:08:19 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:20.789 14:08:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:20.789 /dev/nbd12 00:07:20.789 14:08:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:20.789 14:08:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:20.789 14:08:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:20.789 14:08:19 -- common/autotest_common.sh@867 -- # local i 00:07:20.789 14:08:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:20.789 14:08:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:20.789 14:08:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:20.789 14:08:19 -- common/autotest_common.sh@871 -- # break 00:07:20.789 14:08:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:20.789 14:08:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:20.789 14:08:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.789 1+0 records in 00:07:20.789 1+0 records out 00:07:20.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130337 s, 3.1 MB/s 00:07:20.789 14:08:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.789 14:08:19 -- common/autotest_common.sh@884 -- # size=4096 00:07:20.789 14:08:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.789 14:08:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:20.789 14:08:19 -- common/autotest_common.sh@887 -- # return 0 00:07:20.789 14:08:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.789 14:08:19 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:20.789 14:08:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:21.046 /dev/nbd13 00:07:21.046 14:08:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:21.046 14:08:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:21.046 14:08:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:21.046 14:08:19 -- common/autotest_common.sh@867 -- # local i 00:07:21.046 14:08:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:21.046 14:08:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:21.046 14:08:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
00:07:21.046 14:08:19 -- common/autotest_common.sh@871 -- # break 00:07:21.046 14:08:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:21.046 14:08:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:21.046 14:08:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.046 1+0 records in 00:07:21.046 1+0 records out 00:07:21.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000943561 s, 4.3 MB/s 00:07:21.046 14:08:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.046 14:08:19 -- common/autotest_common.sh@884 -- # size=4096 00:07:21.046 14:08:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.046 14:08:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:21.046 14:08:19 -- common/autotest_common.sh@887 -- # return 0 00:07:21.046 14:08:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.046 14:08:19 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:21.046 14:08:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.046 14:08:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.047 14:08:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.305 14:08:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:21.305 { 00:07:21.305 "nbd_device": "/dev/nbd0", 00:07:21.305 "bdev_name": "Nvme0n1" 00:07:21.305 }, 00:07:21.305 { 00:07:21.305 "nbd_device": "/dev/nbd1", 00:07:21.305 "bdev_name": "Nvme1n1" 00:07:21.305 }, 00:07:21.305 { 00:07:21.305 "nbd_device": "/dev/nbd10", 00:07:21.305 "bdev_name": "Nvme2n1" 00:07:21.305 }, 00:07:21.305 { 00:07:21.305 "nbd_device": "/dev/nbd11", 00:07:21.305 "bdev_name": "Nvme2n2" 00:07:21.305 }, 00:07:21.305 { 00:07:21.305 "nbd_device": "/dev/nbd12", 00:07:21.305 "bdev_name": "Nvme2n3" 00:07:21.305 }, 00:07:21.305 { 00:07:21.305 "nbd_device": "/dev/nbd13", 00:07:21.305 "bdev_name": "Nvme3n1" 00:07:21.305 } 00:07:21.305 ]' 00:07:21.305 14:08:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:21.305 { 00:07:21.305 "nbd_device": "/dev/nbd0", 00:07:21.305 "bdev_name": "Nvme0n1" 00:07:21.305 }, 00:07:21.305 { 00:07:21.305 "nbd_device": "/dev/nbd1", 00:07:21.305 "bdev_name": "Nvme1n1" 00:07:21.305 }, 00:07:21.305 { 00:07:21.305 "nbd_device": "/dev/nbd10", 00:07:21.305 "bdev_name": "Nvme2n1" 00:07:21.305 }, 00:07:21.305 { 00:07:21.305 "nbd_device": "/dev/nbd11", 00:07:21.305 "bdev_name": "Nvme2n2" 00:07:21.305 }, 00:07:21.305 { 00:07:21.305 "nbd_device": "/dev/nbd12", 00:07:21.305 "bdev_name": "Nvme2n3" 00:07:21.305 }, 00:07:21.305 { 00:07:21.305 "nbd_device": "/dev/nbd13", 00:07:21.305 "bdev_name": "Nvme3n1" 00:07:21.305 } 00:07:21.305 ]' 00:07:21.305 14:08:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.305 14:08:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:21.305 /dev/nbd1 00:07:21.305 /dev/nbd10 00:07:21.305 /dev/nbd11 00:07:21.305 /dev/nbd12 00:07:21.305 /dev/nbd13' 00:07:21.305 14:08:19 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:21.305 /dev/nbd1 00:07:21.305 /dev/nbd10 00:07:21.305 /dev/nbd11 00:07:21.305 /dev/nbd12 00:07:21.305 /dev/nbd13' 00:07:21.305 14:08:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.305 14:08:19 -- bdev/nbd_common.sh@65 -- # count=6 00:07:21.305 14:08:19 -- bdev/nbd_common.sh@66 -- # echo 6 00:07:21.305 14:08:19 -- bdev/nbd_common.sh@95 -- # 
count=6 00:07:21.305 14:08:19 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:21.305 14:08:19 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:21.305 14:08:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:21.305 14:08:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:21.305 14:08:19 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:21.305 14:08:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:21.305 14:08:19 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:21.305 14:08:19 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:21.305 256+0 records in 00:07:21.305 256+0 records out 00:07:21.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00622549 s, 168 MB/s 00:07:21.305 14:08:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.305 14:08:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:21.564 256+0 records in 00:07:21.564 256+0 records out 00:07:21.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18385 s, 5.7 MB/s 00:07:21.564 14:08:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.564 14:08:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:21.564 256+0 records in 00:07:21.564 256+0 records out 00:07:21.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115527 s, 9.1 MB/s 00:07:21.564 14:08:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.564 14:08:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:21.822 256+0 records in 00:07:21.822 256+0 records out 00:07:21.822 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0999318 s, 10.5 MB/s 00:07:21.822 14:08:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.822 14:08:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:22.080 256+0 records in 00:07:22.080 256+0 records out 00:07:22.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.201318 s, 5.2 MB/s 00:07:22.080 14:08:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.080 14:08:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:22.080 256+0 records in 00:07:22.080 256+0 records out 00:07:22.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0847682 s, 12.4 MB/s 00:07:22.080 14:08:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.080 14:08:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:22.341 256+0 records in 00:07:22.341 256+0 records out 00:07:22.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18898 s, 5.5 MB/s 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.341 14:08:20 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@51 -- # local i 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.341 14:08:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:22.602 14:08:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:22.602 14:08:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:22.602 14:08:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:22.602 14:08:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.602 14:08:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.602 14:08:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:22.602 14:08:20 -- bdev/nbd_common.sh@41 -- # break 00:07:22.602 14:08:20 -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.602 14:08:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.602 14:08:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:22.602 14:08:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:22.603 14:08:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:22.603 14:08:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:22.603 14:08:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.603 14:08:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.603 14:08:21 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:22.603 14:08:21 -- bdev/nbd_common.sh@41 -- # break 00:07:22.603 14:08:21 -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.603 14:08:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.603 14:08:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:22.864 14:08:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:22.864 14:08:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:22.864 14:08:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:22.864 14:08:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.864 14:08:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.864 14:08:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:22.864 14:08:21 -- bdev/nbd_common.sh@41 -- # break 00:07:22.864 14:08:21 -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.864 14:08:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.864 14:08:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:23.126 14:08:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:23.126 14:08:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:23.126 14:08:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:23.126 14:08:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.126 14:08:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.126 14:08:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:23.126 14:08:21 -- bdev/nbd_common.sh@41 -- # break 00:07:23.126 14:08:21 -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.126 14:08:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.126 14:08:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:23.386 14:08:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:23.386 14:08:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:23.386 14:08:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:23.386 14:08:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.386 14:08:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.386 14:08:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:23.386 14:08:21 -- bdev/nbd_common.sh@41 -- # break 00:07:23.386 14:08:21 -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.386 14:08:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.386 14:08:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:23.645 14:08:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:23.645 14:08:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:23.645 14:08:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:23.645 14:08:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.645 14:08:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.645 14:08:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:23.645 14:08:21 -- bdev/nbd_common.sh@41 -- # break 00:07:23.645 14:08:21 -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.645 14:08:21 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.645 14:08:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.645 14:08:21 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:23.645 14:08:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:23.645 14:08:22 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:23.645 14:08:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.906 14:08:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:23.906 14:08:22 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:23.906 14:08:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.906 14:08:22 -- bdev/nbd_common.sh@65 -- # true 00:07:23.906 14:08:22 -- bdev/nbd_common.sh@65 -- # count=0 00:07:23.906 14:08:22 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:23.906 14:08:22 -- bdev/nbd_common.sh@104 -- # count=0 00:07:23.906 14:08:22 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:23.906 14:08:22 -- bdev/nbd_common.sh@109 -- # return 0 00:07:23.906 14:08:22 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:23.906 14:08:22 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.906 14:08:22 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:23.906 14:08:22 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:07:23.906 14:08:22 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:07:23.906 14:08:22 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:23.906 malloc_lvol_verify 00:07:23.906 14:08:22 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:24.165 889c3b47-5ddd-412d-b9bb-00e922178832 00:07:24.165 14:08:22 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:24.426 8ce5238e-f1e1-4cb0-8e4e-d9d89614b3ae 00:07:24.426 14:08:22 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:24.688 /dev/nbd0 00:07:24.688 14:08:23 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:07:24.688 mke2fs 1.47.0 (5-Feb-2023) 00:07:24.688 Discarding device blocks: 0/4096 done 00:07:24.688 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:24.688 00:07:24.688 Allocating group tables: 0/1 done 00:07:24.688 Writing inode tables: 0/1 done 00:07:24.688 Creating journal (1024 blocks): done 00:07:24.688 Writing superblocks and filesystem accounting information: 0/1 done 00:07:24.688 00:07:24.688 14:08:23 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:07:24.688 14:08:23 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:24.688 14:08:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.688 14:08:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:24.688 14:08:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:24.688 14:08:23 -- bdev/nbd_common.sh@51 -- # local i 00:07:24.688 14:08:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.688 14:08:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:24.950 14:08:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:24.950 14:08:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:24.950 14:08:23 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:07:24.950 14:08:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.950 14:08:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.950 14:08:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:24.950 14:08:23 -- bdev/nbd_common.sh@41 -- # break 00:07:24.950 14:08:23 -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.950 14:08:23 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:07:24.950 14:08:23 -- bdev/nbd_common.sh@147 -- # return 0 00:07:24.950 14:08:23 -- bdev/blockdev.sh@324 -- # killprocess 60415 00:07:24.950 14:08:23 -- common/autotest_common.sh@936 -- # '[' -z 60415 ']' 00:07:24.950 14:08:23 -- common/autotest_common.sh@940 -- # kill -0 60415 00:07:24.950 14:08:23 -- common/autotest_common.sh@941 -- # uname 00:07:24.950 14:08:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:24.950 14:08:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60415 00:07:24.950 killing process with pid 60415 00:07:24.950 14:08:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:24.950 14:08:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:24.950 14:08:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60415' 00:07:24.950 14:08:23 -- common/autotest_common.sh@955 -- # kill 60415 00:07:24.950 14:08:23 -- common/autotest_common.sh@960 -- # wait 60415 00:07:25.895 ************************************ 00:07:25.895 END TEST bdev_nbd 00:07:25.895 ************************************ 00:07:25.895 14:08:24 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:07:25.895 00:07:25.895 real 0m10.083s 00:07:25.895 user 0m13.930s 00:07:25.895 sys 0m3.139s 00:07:25.895 14:08:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.895 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:07:25.895 skipping fio tests on NVMe due to multi-ns failures. 00:07:25.895 14:08:24 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:07:25.895 14:08:24 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:07:25.895 14:08:24 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:25.895 14:08:24 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:25.895 14:08:24 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:25.895 14:08:24 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:07:25.895 14:08:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.895 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:07:25.895 ************************************ 00:07:25.895 START TEST bdev_verify 00:07:25.895 ************************************ 00:07:25.895 14:08:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:26.160 [2024-11-19 14:08:24.519985] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
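The run_test bdev_verify line above launches bdevperf as a standalone process against the bdev.json written earlier in the run, so the command can be replayed by hand. Flag readings below follow bdevperf's usage for the options shown; -C is kept exactly as captured (here it pairs each bdev with both cores, which is why every device appears twice in the results table):

# Replaying the verify pass outside the harness (paths as in this workspace).
# -q 128: queue depth; -o 4096: 4 KiB I/Os; -w verify: write, read back, compare;
# -t 5: run time in seconds; -m 0x3: reactors on cores 0 and 1; -C as captured above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3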
00:07:26.160 [2024-11-19 14:08:24.520122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60790 ] 00:07:26.160 [2024-11-19 14:08:24.672925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:26.425 [2024-11-19 14:08:24.865151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.425 [2024-11-19 14:08:24.865401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.999 Running I/O for 5 seconds... 00:07:32.270 00:07:32.270 Latency(us) 00:07:32.270 [2024-11-19T14:08:30.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.270 [2024-11-19T14:08:30.832Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.270 Verification LBA range: start 0x0 length 0xbd0bd 00:07:32.270 Nvme0n1 : 5.04 2432.43 9.50 0.00 0.00 52475.67 9830.40 73803.62 00:07:32.270 [2024-11-19T14:08:30.832Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.270 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:32.270 Nvme0n1 : 5.05 2431.48 9.50 0.00 0.00 52451.00 13510.50 72190.42 00:07:32.270 [2024-11-19T14:08:30.832Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.270 Verification LBA range: start 0x0 length 0xa0000 00:07:32.270 Nvme1n1 : 5.05 2431.20 9.50 0.00 0.00 52449.88 10384.94 71383.83 00:07:32.270 [2024-11-19T14:08:30.832Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.270 Verification LBA range: start 0xa0000 length 0xa0000 00:07:32.270 Nvme1n1 : 5.06 2432.84 9.50 0.00 0.00 52409.99 6503.19 71383.83 00:07:32.270 [2024-11-19T14:08:30.832Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.270 Verification LBA range: start 0x0 length 0x80000 00:07:32.270 Nvme2n1 : 5.05 2435.30 9.51 0.00 0.00 52181.05 3755.72 56461.78 00:07:32.270 [2024-11-19T14:08:30.832Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.270 Verification LBA range: start 0x80000 length 0x80000 00:07:32.270 Nvme2n1 : 5.06 2431.32 9.50 0.00 0.00 52256.36 7813.91 61301.37 00:07:32.270 [2024-11-19T14:08:30.832Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.270 Verification LBA range: start 0x0 length 0x80000 00:07:32.270 Nvme2n2 : 5.06 2440.55 9.53 0.00 0.00 52023.45 2722.26 55655.19 00:07:32.270 [2024-11-19T14:08:30.832Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.270 Verification LBA range: start 0x80000 length 0x80000 00:07:32.270 Nvme2n2 : 5.07 2436.55 9.52 0.00 0.00 52128.01 2709.66 54848.59 00:07:32.270 [2024-11-19T14:08:30.832Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.270 Verification LBA range: start 0x0 length 0x80000 00:07:32.270 Nvme2n3 : 5.06 2439.40 9.53 0.00 0.00 51985.50 4285.05 54041.99 00:07:32.270 [2024-11-19T14:08:30.832Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.270 Verification LBA range: start 0x80000 length 0x80000 00:07:32.270 Nvme2n3 : 5.07 2436.00 9.52 0.00 0.00 52088.62 2848.30 54445.29 00:07:32.270 [2024-11-19T14:08:30.832Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.270 Verification LBA range: start 0x0 length 0x20000 00:07:32.270 Nvme3n1 : 5.07 
2445.04 9.55 0.00 0.00 51825.83 4234.63 56461.78 00:07:32.270 [2024-11-19T14:08:30.832Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.270 Verification LBA range: start 0x20000 length 0x20000 00:07:32.270 Nvme3n1 : 5.07 2433.99 9.51 0.00 0.00 52035.70 6452.78 54041.99 00:07:32.270 [2024-11-19T14:08:30.832Z] =================================================================================================================== 00:07:32.270 [2024-11-19T14:08:30.832Z] Total : 29226.11 114.16 0.00 0.00 52191.98 2709.66 73803.62 00:07:50.463 00:07:50.463 real 0m23.102s 00:07:50.463 user 0m28.971s 00:07:50.463 sys 0m0.587s 00:07:50.463 14:08:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:50.463 ************************************ 00:07:50.463 14:08:47 -- common/autotest_common.sh@10 -- # set +x 00:07:50.463 END TEST bdev_verify 00:07:50.463 ************************************ 00:07:50.463 14:08:47 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:50.463 14:08:47 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:07:50.463 14:08:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.463 14:08:47 -- common/autotest_common.sh@10 -- # set +x 00:07:50.463 ************************************ 00:07:50.463 START TEST bdev_verify_big_io 00:07:50.463 ************************************ 00:07:50.463 14:08:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:50.463 [2024-11-19 14:08:47.662451] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:50.463 [2024-11-19 14:08:47.662561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60998 ] 00:07:50.463 [2024-11-19 14:08:47.804034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:50.463 [2024-11-19 14:08:47.985654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.463 [2024-11-19 14:08:47.985875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.463 Running I/O for 5 seconds... 
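bdev_verify_big_io, started just above, reuses the same bdevperf harness with only the I/O size changed to 64 KiB (-o 65536); per-device MiB/s in the table that follows is therefore IOPS x 64 KiB (e.g. 283.07 x 64 / 1024 ≈ 17.69 for the first Nvme0n1 row):

# Same invocation as the 4 KiB verify pass, with 64 KiB I/Os:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3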
00:07:55.729 00:07:55.729 Latency(us) 00:07:55.729 [2024-11-19T14:08:54.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.729 [2024-11-19T14:08:54.291Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:55.729 Verification LBA range: start 0x0 length 0xbd0b 00:07:55.729 Nvme0n1 : 5.38 283.07 17.69 0.00 0.00 443038.51 48597.46 777559.43 00:07:55.729 [2024-11-19T14:08:54.291Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:55.729 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:55.729 Nvme0n1 : 5.50 253.12 15.82 0.00 0.00 457815.57 4234.63 622692.82 00:07:55.729 [2024-11-19T14:08:54.291Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:55.729 Verification LBA range: start 0x0 length 0xa000 00:07:55.729 Nvme1n1 : 5.39 282.98 17.69 0.00 0.00 436579.52 49202.41 716258.07 00:07:55.729 [2024-11-19T14:08:54.291Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:55.729 Verification LBA range: start 0xa000 length 0xa000 00:07:55.729 Nvme1n1 : 5.42 216.65 13.54 0.00 0.00 579959.75 36296.86 845313.58 00:07:55.729 [2024-11-19T14:08:54.291Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:55.729 Verification LBA range: start 0x0 length 0x8000 00:07:55.729 Nvme2n1 : 5.42 289.01 18.06 0.00 0.00 422345.43 33272.12 658183.09 00:07:55.729 [2024-11-19T14:08:54.291Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:55.729 Verification LBA range: start 0x8000 length 0x8000 00:07:55.729 Nvme2n1 : 5.43 216.57 13.54 0.00 0.00 569856.18 37103.46 774333.05 00:07:55.729 [2024-11-19T14:08:54.291Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:55.729 Verification LBA range: start 0x0 length 0x8000 00:07:55.729 Nvme2n2 : 5.43 288.84 18.05 0.00 0.00 415556.66 35490.26 596881.72 00:07:55.729 [2024-11-19T14:08:54.291Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:55.729 Verification LBA range: start 0x8000 length 0x8000 00:07:55.729 Nvme2n2 : 5.46 222.35 13.90 0.00 0.00 547121.54 30449.03 700126.13 00:07:55.729 [2024-11-19T14:08:54.291Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:55.729 Verification LBA range: start 0x0 length 0x8000 00:07:55.729 Nvme2n3 : 5.46 294.91 18.43 0.00 0.00 400777.97 30045.74 529127.58 00:07:55.729 [2024-11-19T14:08:54.291Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:55.729 Verification LBA range: start 0x8000 length 0x8000 00:07:55.729 Nvme2n3 : 5.46 222.26 13.89 0.00 0.00 537471.77 31658.93 625919.21 00:07:55.729 [2024-11-19T14:08:54.291Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:55.729 Verification LBA range: start 0x0 length 0x2000 00:07:55.729 Nvme3n1 : 5.47 312.13 19.51 0.00 0.00 374938.78 2823.09 467826.22 00:07:55.729 [2024-11-19T14:08:54.291Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:55.729 Verification LBA range: start 0x2000 length 0x2000 00:07:55.729 Nvme3n1 : 5.48 228.85 14.30 0.00 0.00 513979.31 10838.65 561391.46 00:07:55.729 [2024-11-19T14:08:54.291Z] =================================================================================================================== 00:07:55.729 [2024-11-19T14:08:54.291Z] Total : 3110.74 194.42 0.00 0.00 466129.13 2823.09 845313.58 00:07:57.736 00:07:57.736 real 0m8.468s 00:07:57.736 user 
0m15.474s 00:07:57.736 sys 0m0.266s 00:07:57.736 14:08:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:57.736 ************************************ 00:07:57.736 END TEST bdev_verify_big_io 00:07:57.736 ************************************ 00:07:57.736 14:08:56 -- common/autotest_common.sh@10 -- # set +x 00:07:57.736 14:08:56 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:57.736 14:08:56 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:57.736 14:08:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.736 14:08:56 -- common/autotest_common.sh@10 -- # set +x 00:07:57.736 ************************************ 00:07:57.736 START TEST bdev_write_zeroes 00:07:57.736 ************************************ 00:07:57.736 14:08:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:57.736 [2024-11-19 14:08:56.198160] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:57.736 [2024-11-19 14:08:56.198266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61114 ] 00:07:57.996 [2024-11-19 14:08:56.343494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.257 [2024-11-19 14:08:56.572373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.831 Running I/O for 1 seconds... 00:07:59.770 00:07:59.770 Latency(us) 00:07:59.770 [2024-11-19T14:08:58.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.770 [2024-11-19T14:08:58.332Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:59.770 Nvme0n1 : 1.01 10842.90 42.36 0.00 0.00 11771.82 4713.55 27021.00 00:07:59.770 [2024-11-19T14:08:58.332Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:59.770 Nvme1n1 : 1.01 10830.29 42.31 0.00 0.00 11771.01 8570.09 22483.89 00:07:59.770 [2024-11-19T14:08:58.332Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:59.770 Nvme2n1 : 1.01 10862.93 42.43 0.00 0.00 11717.31 6553.60 21677.29 00:07:59.770 [2024-11-19T14:08:58.332Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:59.770 Nvme2n2 : 1.01 10850.56 42.38 0.00 0.00 11715.08 6906.49 22383.06 00:07:59.770 [2024-11-19T14:08:58.332Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:59.770 Nvme2n3 : 1.02 10838.25 42.34 0.00 0.00 11705.30 7259.37 22483.89 00:07:59.770 [2024-11-19T14:08:58.332Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:59.770 Nvme3n1 : 1.02 10876.28 42.49 0.00 0.00 11638.43 4965.61 20669.05 00:07:59.770 [2024-11-19T14:08:58.332Z] =================================================================================================================== 00:07:59.770 [2024-11-19T14:08:58.332Z] Total : 65101.21 254.30 0.00 0.00 11719.64 4713.55 27021.00 00:08:00.715 00:08:00.715 real 0m2.863s 00:08:00.715 user 0m2.517s 00:08:00.715 sys 0m0.228s 00:08:00.715 14:08:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 
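A quick consistency check on the write_zeroes totals above: throughput is simply aggregate IOPS times the 4 KiB I/O size, which reproduces the Total row of the table.

# 65101.21 IOPS x 4096 bytes per I/O, expressed in MiB/s:
awk 'BEGIN { printf "%.2f\n", 65101.21 * 4096 / 1048576 }'    # prints 254.30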
00:08:00.715 ************************************ 00:08:00.715 END TEST bdev_write_zeroes 00:08:00.715 ************************************ 00:08:00.715 14:08:59 -- common/autotest_common.sh@10 -- # set +x 00:08:00.715 14:08:59 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:00.715 14:08:59 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:00.715 14:08:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.715 14:08:59 -- common/autotest_common.sh@10 -- # set +x 00:08:00.715 ************************************ 00:08:00.715 START TEST bdev_json_nonenclosed 00:08:00.715 ************************************ 00:08:00.715 14:08:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:00.715 [2024-11-19 14:08:59.128510] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:00.715 [2024-11-19 14:08:59.128639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61167 ] 00:08:00.976 [2024-11-19 14:08:59.281195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.976 [2024-11-19 14:08:59.457176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.976 [2024-11-19 14:08:59.457324] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:00.976 [2024-11-19 14:08:59.457347] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:01.237 00:08:01.237 real 0m0.694s 00:08:01.237 user 0m0.466s 00:08:01.237 sys 0m0.123s 00:08:01.237 14:08:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:01.237 14:08:59 -- common/autotest_common.sh@10 -- # set +x 00:08:01.237 ************************************ 00:08:01.237 END TEST bdev_json_nonenclosed 00:08:01.237 ************************************ 00:08:01.498 14:08:59 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:01.498 14:08:59 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:01.498 14:08:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.498 14:08:59 -- common/autotest_common.sh@10 -- # set +x 00:08:01.498 ************************************ 00:08:01.498 START TEST bdev_json_nonarray 00:08:01.498 ************************************ 00:08:01.498 14:08:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:01.498 [2024-11-19 14:08:59.899511] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
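The bdev_json_nonenclosed test above, and the bdev_json_nonarray run starting here, feed bdevperf deliberately malformed configs and pass when spdk_subsystem_init_from_json_config rejects them. The repo's actual nonenclosed.json/nonarray.json contents are not shown in this log; the inputs below are hypothetical minimal examples that would plausibly trip the same two error paths:

# Top-level value is an array rather than an object -> "not enclosed in {}."
cat > /tmp/nonenclosed.json <<'EOF'
[ { "subsystems": [] } ]
EOF
# "subsystems" maps to an object rather than an array -> "'subsystems' should be an array."
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": {} }
EOF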
00:08:01.498 [2024-11-19 14:08:59.899657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61192 ] 00:08:01.498 [2024-11-19 14:09:00.051291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.759 [2024-11-19 14:09:00.287240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.759 [2024-11-19 14:09:00.287458] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:08:01.759 [2024-11-19 14:09:00.287480] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:02.332 00:08:02.332 real 0m0.776s 00:08:02.332 user 0m0.544s 00:08:02.332 sys 0m0.124s 00:08:02.332 14:09:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.332 ************************************ 00:08:02.332 END TEST bdev_json_nonarray 00:08:02.332 ************************************ 00:08:02.332 14:09:00 -- common/autotest_common.sh@10 -- # set +x 00:08:02.332 14:09:00 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:08:02.332 14:09:00 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:08:02.332 14:09:00 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:08:02.332 14:09:00 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:08:02.332 14:09:00 -- bdev/blockdev.sh@809 -- # cleanup 00:08:02.332 14:09:00 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:02.332 14:09:00 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:02.332 14:09:00 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:08:02.332 14:09:00 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:08:02.332 14:09:00 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:08:02.332 14:09:00 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:08:02.332 00:08:02.332 real 0m53.175s 00:08:02.332 user 1m11.438s 00:08:02.332 sys 0m5.605s 00:08:02.332 14:09:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.332 14:09:00 -- common/autotest_common.sh@10 -- # set +x 00:08:02.332 ************************************ 00:08:02.332 END TEST blockdev_nvme 00:08:02.332 ************************************ 00:08:02.332 14:09:00 -- spdk/autotest.sh@206 -- # uname -s 00:08:02.332 14:09:00 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:08:02.332 14:09:00 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:02.332 14:09:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:02.332 14:09:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.332 14:09:00 -- common/autotest_common.sh@10 -- # set +x 00:08:02.332 ************************************ 00:08:02.332 START TEST blockdev_nvme_gpt 00:08:02.332 ************************************ 00:08:02.332 14:09:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:02.332 * Looking for test storage... 
00:08:02.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:02.332 14:09:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:02.332 14:09:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:02.332 14:09:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:02.332 14:09:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:02.332 14:09:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:02.332 14:09:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:02.332 14:09:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:02.332 14:09:00 -- scripts/common.sh@335 -- # IFS=.-: 00:08:02.332 14:09:00 -- scripts/common.sh@335 -- # read -ra ver1 00:08:02.332 14:09:00 -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.332 14:09:00 -- scripts/common.sh@336 -- # read -ra ver2 00:08:02.332 14:09:00 -- scripts/common.sh@337 -- # local 'op=<' 00:08:02.332 14:09:00 -- scripts/common.sh@339 -- # ver1_l=2 00:08:02.332 14:09:00 -- scripts/common.sh@340 -- # ver2_l=1 00:08:02.332 14:09:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:02.332 14:09:00 -- scripts/common.sh@343 -- # case "$op" in 00:08:02.332 14:09:00 -- scripts/common.sh@344 -- # : 1 00:08:02.332 14:09:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:02.332 14:09:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:02.332 14:09:00 -- scripts/common.sh@364 -- # decimal 1 00:08:02.332 14:09:00 -- scripts/common.sh@352 -- # local d=1 00:08:02.332 14:09:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.332 14:09:00 -- scripts/common.sh@354 -- # echo 1 00:08:02.332 14:09:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:02.332 14:09:00 -- scripts/common.sh@365 -- # decimal 2 00:08:02.332 14:09:00 -- scripts/common.sh@352 -- # local d=2 00:08:02.332 14:09:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.332 14:09:00 -- scripts/common.sh@354 -- # echo 2 00:08:02.332 14:09:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:02.332 14:09:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:02.332 14:09:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:02.332 14:09:00 -- scripts/common.sh@367 -- # return 0 00:08:02.332 14:09:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.332 14:09:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:02.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.332 --rc genhtml_branch_coverage=1 00:08:02.332 --rc genhtml_function_coverage=1 00:08:02.332 --rc genhtml_legend=1 00:08:02.332 --rc geninfo_all_blocks=1 00:08:02.332 --rc geninfo_unexecuted_blocks=1 00:08:02.332 00:08:02.332 ' 00:08:02.332 14:09:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:02.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.333 --rc genhtml_branch_coverage=1 00:08:02.333 --rc genhtml_function_coverage=1 00:08:02.333 --rc genhtml_legend=1 00:08:02.333 --rc geninfo_all_blocks=1 00:08:02.333 --rc geninfo_unexecuted_blocks=1 00:08:02.333 00:08:02.333 ' 00:08:02.333 14:09:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:02.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.333 --rc genhtml_branch_coverage=1 00:08:02.333 --rc genhtml_function_coverage=1 00:08:02.333 --rc genhtml_legend=1 00:08:02.333 --rc geninfo_all_blocks=1 00:08:02.333 --rc geninfo_unexecuted_blocks=1 00:08:02.333 00:08:02.333 ' 00:08:02.333 14:09:00 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:02.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.333 --rc genhtml_branch_coverage=1 00:08:02.333 --rc genhtml_function_coverage=1 00:08:02.333 --rc genhtml_legend=1 00:08:02.333 --rc geninfo_all_blocks=1 00:08:02.333 --rc geninfo_unexecuted_blocks=1 00:08:02.333 00:08:02.333 ' 00:08:02.333 14:09:00 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:02.333 14:09:00 -- bdev/nbd_common.sh@6 -- # set -e 00:08:02.333 14:09:00 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:02.333 14:09:00 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:02.333 14:09:00 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:02.333 14:09:00 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:02.333 14:09:00 -- bdev/blockdev.sh@18 -- # : 00:08:02.333 14:09:00 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:02.333 14:09:00 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:02.333 14:09:00 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:02.333 14:09:00 -- bdev/blockdev.sh@672 -- # uname -s 00:08:02.333 14:09:00 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:02.333 14:09:00 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:02.333 14:09:00 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:08:02.333 14:09:00 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:02.333 14:09:00 -- bdev/blockdev.sh@682 -- # dek= 00:08:02.333 14:09:00 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:02.333 14:09:00 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:02.333 14:09:00 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:02.333 14:09:00 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:08:02.333 14:09:00 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:08:02.333 14:09:00 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:02.333 14:09:00 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61275 00:08:02.333 14:09:00 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:02.333 14:09:00 -- bdev/blockdev.sh@47 -- # waitforlisten 61275 00:08:02.333 14:09:00 -- common/autotest_common.sh@829 -- # '[' -z 61275 ']' 00:08:02.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.333 14:09:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.333 14:09:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:02.333 14:09:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.333 14:09:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:02.333 14:09:00 -- common/autotest_common.sh@10 -- # set +x 00:08:02.333 14:09:00 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:02.595 [2024-11-19 14:09:00.975120] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
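waitforlisten above blocks until the freshly started spdk_tgt (pid 61275) answers RPCs on /var/tmp/spdk.sock. A rough equivalent of that readiness loop, with the retry count and poll interval assumed rather than taken from the helper:

# Assumed sketch: poll the RPC socket until the target responds.
spdk_tgt_pid=61275   # pid reported by the harness above
for (( i = 0; i < 100; i++ )); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    # bail out early if the target died instead of listening
    kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo "spdk_tgt exited before listening" >&2; break; }
    sleep 0.1
done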
00:08:02.595 [2024-11-19 14:09:00.975263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61275 ] 00:08:02.595 [2024-11-19 14:09:01.132827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.856 [2024-11-19 14:09:01.345064] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:02.856 [2024-11-19 14:09:01.345270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.243 14:09:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:04.243 14:09:02 -- common/autotest_common.sh@862 -- # return 0 00:08:04.243 14:09:02 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:04.243 14:09:02 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:08:04.243 14:09:02 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:04.505 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:04.505 Waiting for block devices as requested 00:08:04.505 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:08:04.768 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:08:04.768 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:05.030 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:08:10.322 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:08:10.322 14:09:08 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:08:10.322 14:09:08 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:10.322 14:09:08 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:10.322 14:09:08 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:10.322 14:09:08 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:10.322 14:09:08 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:08:10.322 14:09:08 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:08:10.322 14:09:08 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:08:10.322 14:09:08 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:10.322 14:09:08 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:10.322 14:09:08 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:10.322 14:09:08 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:10.322 14:09:08 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:10.322 14:09:08 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:10.322 14:09:08 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:10.322 14:09:08 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:08:10.322 14:09:08 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:08:10.322 14:09:08 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:10.322 14:09:08 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:10.322 14:09:08 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:10.322 14:09:08 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:08:10.322 14:09:08 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:08:10.322 14:09:08 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:10.322 14:09:08 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:10.322 14:09:08 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:10.322 14:09:08 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:08:10.322 14:09:08 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:08:10.322 14:09:08 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:10.322 14:09:08 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:10.322 14:09:08 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:10.322 14:09:08 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:08:10.322 14:09:08 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:08:10.322 14:09:08 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:10.322 14:09:08 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:10.322 14:09:08 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:10.322 14:09:08 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:08:10.322 14:09:08 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:08:10.322 14:09:08 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:10.322 14:09:08 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:10.322 14:09:08 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:08:10.322 14:09:08 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:08:10.322 14:09:08 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:08:10.322 14:09:08 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:10.322 14:09:08 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:08:10.322 14:09:08 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:08:10.322 14:09:08 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:08:10.322 14:09:08 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:08:10.322 BYT; 00:08:10.322 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:10.322 14:09:08 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:08:10.322 BYT; 00:08:10.322 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:10.322 14:09:08 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:08:10.322 14:09:08 -- bdev/blockdev.sh@114 -- # break 00:08:10.322 14:09:08 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:08:10.322 14:09:08 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:10.322 14:09:08 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:10.322 14:09:08 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:10.322 14:09:08 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:08:10.322 14:09:08 -- scripts/common.sh@410 -- # local spdk_guid 00:08:10.322 14:09:08 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:10.322 14:09:08 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:10.322 14:09:08 -- scripts/common.sh@415 -- # IFS='()' 00:08:10.322 14:09:08 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:08:10.322 14:09:08 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:10.322 14:09:08 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:10.322 14:09:08 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:10.322 14:09:08 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:10.322 14:09:08 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:10.322 14:09:08 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:08:10.322 14:09:08 -- scripts/common.sh@422 -- # local spdk_guid 00:08:10.322 14:09:08 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:10.322 14:09:08 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:10.322 14:09:08 -- scripts/common.sh@427 -- # IFS='()' 00:08:10.322 14:09:08 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:08:10.322 14:09:08 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:10.322 14:09:08 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:10.322 14:09:08 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:10.322 14:09:08 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:10.322 14:09:08 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:10.322 14:09:08 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:08:11.263 The operation has completed successfully. 00:08:11.263 14:09:09 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:08:12.200 The operation has completed successfully. 
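Both sgdisk calls above stamp the freshly created SPDK_TEST_first and SPDK_TEST_second partitions with SPDK's GPT type GUIDs, which get_spdk_gpt and get_spdk_gpt_old scrape out of module/bdev/gpt/gpt.h. A condensed sketch of that extraction, assuming the header wraps the GUID fields in parentheses as the grep -w output suggests; the two substitutions mirror the pair of spdk_guid= assignments in the trace:

    # Hedged sketch of get_spdk_gpt: read the GUID macro from the C header,
    # join its comma-separated 0x fields with '-', then drop the 0x prefixes.
    get_spdk_gpt_sketch() {
        local gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h spdk_guid
        IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h")
        spdk_guid=${spdk_guid//, /-}   # 0x6527994e-0x2c5a-... (first assignment above)
        spdk_guid=${spdk_guid//0x/}    # 6527994e-2c5a-4eec-9613-8f5944074e8b
        echo "$spdk_guid"
    }

sgdisk -t retypes each partition and -u pins its unique partition GUID, which is why the same 6f89f330-... and abf1734f-... values reappear in the bdev_get_bdevs dump further down.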
00:08:12.200 14:09:10 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:08:13.139 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:13.139 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:08:13.139 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:08:13.139 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:08:13.139 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:08:13.400 14:09:11 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs
00:08:13.400 14:09:11 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:13.400 14:09:11 -- common/autotest_common.sh@10 -- # set +x
00:08:13.400 []
00:08:13.400 14:09:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:13.400 14:09:11 -- bdev/blockdev.sh@134 -- # setup_nvme_conf
00:08:13.400 14:09:11 -- bdev/blockdev.sh@79 -- # local json
00:08:13.400 14:09:11 -- bdev/blockdev.sh@80 -- # mapfile -t json
00:08:13.400 14:09:11 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:08:13.400 14:09:11 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\'''
00:08:13.400 14:09:11 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:13.400 14:09:11 -- common/autotest_common.sh@10 -- # set +x
00:08:13.661 14:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:13.661 14:09:12 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine
00:08:13.661 14:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:13.661 14:09:12 -- common/autotest_common.sh@10 -- # set +x
00:08:13.661 14:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:13.661 14:09:12 -- bdev/blockdev.sh@738 -- # cat
00:08:13.661 14:09:12 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel
00:08:13.661 14:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:13.661 14:09:12 -- common/autotest_common.sh@10 -- # set +x
00:08:13.661 14:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:13.661 14:09:12 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev
00:08:13.661 14:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:13.661 14:09:12 -- common/autotest_common.sh@10 -- # set +x
00:08:13.661 14:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:13.661 14:09:12 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf
00:08:13.661 14:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:13.661 14:09:12 -- common/autotest_common.sh@10 -- # set +x
00:08:13.661 14:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:13.661 14:09:12 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs
00:08:13.661 14:09:12 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs
00:08:13.661 14:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:13.661 14:09:12 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)'
00:08:13.661 14:09:12 -- common/autotest_common.sh@10 -- # set +x
00:08:13.661 14:09:12 --
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.661 14:09:12 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:13.661 14:09:12 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:13.662 14:09:12 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d3f7f435-123d-4d3a-81e3-fdb6d3e9dd7a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d3f7f435-123d-4d3a-81e3-fdb6d3e9dd7a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "59a4ffa6-99b3-48bd-9aad-0ab8fde8117c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "59a4ffa6-99b3-48bd-9aad-0ab8fde8117c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "88418279-8de0-49f9-9a15-054fd9548263"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "88418279-8de0-49f9-9a15-054fd9548263",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "1cd1230d-e1c4-4c27-b7bf-f057eb939869"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1cd1230d-e1c4-4c27-b7bf-f057eb939869",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "93eccfe0-1b0b-48e2-82ad-ea134f78f818"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "93eccfe0-1b0b-48e2-82ad-ea134f78f818",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:13.662 14:09:12 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:13.662 14:09:12 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:08:13.662 14:09:12 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:13.662 14:09:12 -- bdev/blockdev.sh@752 -- # killprocess 61275 00:08:13.662 14:09:12 -- common/autotest_common.sh@936 -- # '[' -z 61275 ']' 00:08:13.662 14:09:12 -- common/autotest_common.sh@940 -- # kill -0 61275 00:08:13.662 14:09:12 -- common/autotest_common.sh@941 -- # uname 00:08:13.662 14:09:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:13.662 14:09:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61275 00:08:13.662 14:09:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:13.662 14:09:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:13.662 killing process with pid 61275 00:08:13.662 14:09:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61275' 00:08:13.662 14:09:12 -- common/autotest_common.sh@955 -- # kill 61275 00:08:13.662 14:09:12 -- common/autotest_common.sh@960 -- # wait 61275 00:08:15.578 14:09:13 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:15.578 14:09:13 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:15.578 14:09:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:15.578 14:09:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.578 14:09:13 -- common/autotest_common.sh@10 -- # set +x 00:08:15.578 ************************************ 00:08:15.578 START TEST bdev_hello_world 00:08:15.578 ************************************ 00:08:15.578 14:09:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:15.578 [2024-11-19 14:09:14.071751] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:15.578 [2024-11-19 14:09:14.071939] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61935 ] 00:08:15.839 [2024-11-19 14:09:14.219967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.101 [2024-11-19 14:09:14.499415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.674 [2024-11-19 14:09:15.114074] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:16.674 [2024-11-19 14:09:15.114160] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:08:16.674 [2024-11-19 14:09:15.114187] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:16.674 [2024-11-19 14:09:15.117180] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:16.674 [2024-11-19 14:09:15.117963] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:16.674 [2024-11-19 14:09:15.118011] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:16.674 [2024-11-19 14:09:15.118172] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:16.674 00:08:16.674 [2024-11-19 14:09:15.118198] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:17.615 00:08:17.615 real 0m1.980s 00:08:17.615 user 0m1.580s 00:08:17.615 sys 0m0.287s 00:08:17.615 14:09:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.615 ************************************ 00:08:17.615 END TEST bdev_hello_world 00:08:17.615 14:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:17.615 ************************************ 00:08:17.615 14:09:16 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:17.615 14:09:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:17.615 14:09:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.615 14:09:16 -- common/autotest_common.sh@10 -- # set +x 00:08:17.615 ************************************ 00:08:17.615 START TEST bdev_bounds 00:08:17.615 ************************************ 00:08:17.615 14:09:16 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:08:17.615 14:09:16 -- bdev/blockdev.sh@288 -- # bdevio_pid=61977 00:08:17.615 Process bdevio pid: 61977 00:08:17.615 14:09:16 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:17.615 14:09:16 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 61977' 00:08:17.615 14:09:16 -- bdev/blockdev.sh@291 -- # waitforlisten 61977 00:08:17.615 14:09:16 -- common/autotest_common.sh@829 -- # '[' -z 61977 ']' 00:08:17.615 14:09:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.615 14:09:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.615 14:09:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
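The START TEST/END TEST banners and the real/user/sys footer above come from the run_test wrapper that times every sub-test. A hedged sketch of its shape (the real helper in common/autotest_common.sh also records each result for the end-of-run summary):

    # Hedged sketch of run_test: print banners around a timed command.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # emits the real/user/sys lines seen in this log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }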
00:08:17.615 14:09:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.615 14:09:16 -- common/autotest_common.sh@10 -- # set +x 00:08:17.615 14:09:16 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:17.615 [2024-11-19 14:09:16.099580] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:17.615 [2024-11-19 14:09:16.099690] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61977 ] 00:08:17.874 [2024-11-19 14:09:16.250500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:17.875 [2024-11-19 14:09:16.426697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.875 [2024-11-19 14:09:16.427109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.875 [2024-11-19 14:09:16.427117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.250 14:09:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:19.250 14:09:17 -- common/autotest_common.sh@862 -- # return 0 00:08:19.250 14:09:17 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:19.250 I/O targets: 00:08:19.250 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:08:19.250 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:08:19.250 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:19.250 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:19.250 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:19.250 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:19.250 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:19.250 00:08:19.250 00:08:19.250 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.250 http://cunit.sourceforge.net/ 00:08:19.250 00:08:19.250 00:08:19.250 Suite: bdevio tests on: Nvme3n1 00:08:19.250 Test: blockdev write read block ...passed 00:08:19.250 Test: blockdev write zeroes read block ...passed 00:08:19.250 Test: blockdev write zeroes read no split ...passed 00:08:19.250 Test: blockdev write zeroes read split ...passed 00:08:19.250 Test: blockdev write zeroes read split partial ...passed 00:08:19.250 Test: blockdev reset ...[2024-11-19 14:09:17.794903] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:08:19.250 [2024-11-19 14:09:17.798458] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:19.250 passed 00:08:19.250 Test: blockdev write read 8 blocks ...passed 00:08:19.250 Test: blockdev write read size > 128k ...passed 00:08:19.250 Test: blockdev write read invalid size ...passed 00:08:19.250 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:19.250 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:19.250 Test: blockdev write read max offset ...passed 00:08:19.250 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:19.250 Test: blockdev writev readv 8 blocks ...passed 00:08:19.509 Test: blockdev writev readv 30 x 1block ...passed 00:08:19.509 Test: blockdev writev readv block ...passed 00:08:19.509 Test: blockdev writev readv size > 128k ...passed 00:08:19.509 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:19.509 Test: blockdev comparev and writev ...[2024-11-19 14:09:17.816906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x271a0a000 len:0x1000 00:08:19.509 [2024-11-19 14:09:17.816971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:19.509 passed 00:08:19.509 Test: blockdev nvme passthru rw ...passed 00:08:19.509 Test: blockdev nvme passthru vendor specific ...[2024-11-19 14:09:17.819410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:19.509 [2024-11-19 14:09:17.819442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:19.509 passed 00:08:19.509 Test: blockdev nvme admin passthru ...passed 00:08:19.509 Test: blockdev copy ...passed 00:08:19.509 Suite: bdevio tests on: Nvme2n3 00:08:19.509 Test: blockdev write read block ...passed 00:08:19.509 Test: blockdev write zeroes read block ...passed 00:08:19.509 Test: blockdev write zeroes read no split ...passed 00:08:19.509 Test: blockdev write zeroes read split ...passed 00:08:19.509 Test: blockdev write zeroes read split partial ...passed 00:08:19.509 Test: blockdev reset ...[2024-11-19 14:09:17.902158] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:19.509 [2024-11-19 14:09:17.905925] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:19.509 passed 00:08:19.509 Test: blockdev write read 8 blocks ...passed 00:08:19.509 Test: blockdev write read size > 128k ...passed 00:08:19.509 Test: blockdev write read invalid size ...passed 00:08:19.509 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:19.509 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:19.509 Test: blockdev write read max offset ...passed 00:08:19.509 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:19.509 Test: blockdev writev readv 8 blocks ...passed 00:08:19.509 Test: blockdev writev readv 30 x 1block ...passed 00:08:19.509 Test: blockdev writev readv block ...passed 00:08:19.509 Test: blockdev writev readv size > 128k ...passed 00:08:19.509 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:19.509 Test: blockdev comparev and writev ...[2024-11-19 14:09:17.923304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x279904000 len:0x1000 00:08:19.509 [2024-11-19 14:09:17.923349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:19.509 passed 00:08:19.509 Test: blockdev nvme passthru rw ...passed 00:08:19.509 Test: blockdev nvme passthru vendor specific ...[2024-11-19 14:09:17.925256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:19.509 [2024-11-19 14:09:17.925286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:19.509 passed 00:08:19.509 Test: blockdev nvme admin passthru ...passed 00:08:19.509 Test: blockdev copy ...passed 00:08:19.509 Suite: bdevio tests on: Nvme2n2 00:08:19.509 Test: blockdev write read block ...passed 00:08:19.509 Test: blockdev write zeroes read block ...passed 00:08:19.509 Test: blockdev write zeroes read no split ...passed 00:08:19.509 Test: blockdev write zeroes read split ...passed 00:08:19.509 Test: blockdev write zeroes read split partial ...passed 00:08:19.509 Test: blockdev reset ...[2024-11-19 14:09:18.030468] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:19.509 [2024-11-19 14:09:18.033531] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:19.509 passed 00:08:19.509 Test: blockdev write read 8 blocks ...passed 00:08:19.509 Test: blockdev write read size > 128k ...passed 00:08:19.509 Test: blockdev write read invalid size ...passed 00:08:19.509 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:19.509 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:19.509 Test: blockdev write read max offset ...passed 00:08:19.509 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:19.509 Test: blockdev writev readv 8 blocks ...passed 00:08:19.509 Test: blockdev writev readv 30 x 1block ...passed 00:08:19.509 Test: blockdev writev readv block ...passed 00:08:19.509 Test: blockdev writev readv size > 128k ...passed 00:08:19.509 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:19.510 Test: blockdev comparev and writev ...[2024-11-19 14:09:18.052337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x279904000 len:0x1000 00:08:19.510 [2024-11-19 14:09:18.052379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:19.510 passed 00:08:19.510 Test: blockdev nvme passthru rw ...passed 00:08:19.510 Test: blockdev nvme passthru vendor specific ...[2024-11-19 14:09:18.054417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:19.510 [2024-11-19 14:09:18.054446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:19.510 passed 00:08:19.510 Test: blockdev nvme admin passthru ...passed 00:08:19.510 Test: blockdev copy ...passed 00:08:19.510 Suite: bdevio tests on: Nvme2n1 00:08:19.510 Test: blockdev write read block ...passed 00:08:19.774 Test: blockdev write zeroes read block ...passed 00:08:19.774 Test: blockdev write zeroes read no split ...passed 00:08:19.774 Test: blockdev write zeroes read split ...passed 00:08:19.774 Test: blockdev write zeroes read split partial ...passed 00:08:19.774 Test: blockdev reset ...[2024-11-19 14:09:18.172405] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:19.774 [2024-11-19 14:09:18.175478] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:19.774 passed 00:08:19.774 Test: blockdev write read 8 blocks ...passed 00:08:19.774 Test: blockdev write read size > 128k ...passed 00:08:19.774 Test: blockdev write read invalid size ...passed 00:08:19.774 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:19.774 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:19.774 Test: blockdev write read max offset ...passed 00:08:19.774 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:19.774 Test: blockdev writev readv 8 blocks ...passed 00:08:19.774 Test: blockdev writev readv 30 x 1block ...passed 00:08:19.774 Test: blockdev writev readv block ...passed 00:08:19.774 Test: blockdev writev readv size > 128k ...passed 00:08:19.774 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:19.774 Test: blockdev comparev and writev ...[2024-11-19 14:09:18.191291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x290c3c000 len:0x1000 00:08:19.774 [2024-11-19 14:09:18.191336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:19.774 passed 00:08:19.774 Test: blockdev nvme passthru rw ...passed 00:08:19.774 Test: blockdev nvme passthru vendor specific ...[2024-11-19 14:09:18.193734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:19.774 [2024-11-19 14:09:18.193764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:19.774 passed 00:08:19.774 Test: blockdev nvme admin passthru ...passed 00:08:19.774 Test: blockdev copy ...passed 00:08:19.774 Suite: bdevio tests on: Nvme1n1 00:08:19.774 Test: blockdev write read block ...passed 00:08:19.774 Test: blockdev write zeroes read block ...passed 00:08:20.031 Test: blockdev write zeroes read no split ...passed 00:08:20.031 Test: blockdev write zeroes read split ...passed 00:08:20.031 Test: blockdev write zeroes read split partial ...passed 00:08:20.031 Test: blockdev reset ...passed 00:08:20.031 Test: blockdev write read 8 blocks ...passed 00:08:20.031 Test: blockdev write read size > 128k ...passed 00:08:20.031 Test: blockdev write read invalid size ...passed 00:08:20.031 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:20.031 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:20.031 Test: blockdev write read max offset ...passed 00:08:20.031 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:20.031 Test: blockdev writev readv 8 blocks ...passed 00:08:20.031 Test: blockdev writev readv 30 x 1block ...passed 00:08:20.031 Test: blockdev writev readv block ...passed 00:08:20.031 Test: blockdev writev readv size > 128k ...passed 00:08:20.031 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:20.031 Test: blockdev comparev and writev ...[2024-11-19 14:09:18.315033] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:08:20.031 [2024-11-19 14:09:18.318633] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:20.031 [2024-11-19 14:09:18.335493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x290c38000 len:0x1000 00:08:20.031 [2024-11-19 14:09:18.335532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:20.031 passed 00:08:20.031 Test: blockdev nvme passthru rw ...passed 00:08:20.032 Test: blockdev nvme passthru vendor specific ...[2024-11-19 14:09:18.337260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:20.032 [2024-11-19 14:09:18.337287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:20.032 passed 00:08:20.032 Test: blockdev nvme admin passthru ...passed 00:08:20.032 Test: blockdev copy ...passed 00:08:20.032 Suite: bdevio tests on: Nvme0n1p2 00:08:20.032 Test: blockdev write read block ...passed 00:08:20.032 Test: blockdev write zeroes read block ...passed 00:08:20.032 Test: blockdev write zeroes read no split ...passed 00:08:20.032 Test: blockdev write zeroes read split ...passed 00:08:20.032 Test: blockdev write zeroes read split partial ...passed 00:08:20.032 Test: blockdev reset ...[2024-11-19 14:09:18.544868] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:20.032 [2024-11-19 14:09:18.548605] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:20.032 passed 00:08:20.032 Test: blockdev write read 8 blocks ...passed 00:08:20.032 Test: blockdev write read size > 128k ...passed 00:08:20.032 Test: blockdev write read invalid size ...passed 00:08:20.032 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:20.032 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:20.032 Test: blockdev write read max offset ...passed 00:08:20.032 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:20.032 Test: blockdev writev readv 8 blocks ...passed 00:08:20.032 Test: blockdev writev readv 30 x 1block ...passed 00:08:20.032 Test: blockdev writev readv block ...passed 00:08:20.032 Test: blockdev writev readv size > 128k ...passed 00:08:20.032 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:20.032 Test: blockdev comparev and writev ...passed 00:08:20.032 Test: blockdev nvme passthru rw ...passed 00:08:20.032 Test: blockdev nvme passthru vendor specific ...passed 00:08:20.032 Test: blockdev nvme admin passthru ...passed 00:08:20.032 Test: blockdev copy ...[2024-11-19 14:09:18.562577] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:08:20.032 separate metadata which is not supported yet. 00:08:20.032 passed 00:08:20.032 Suite: bdevio tests on: Nvme0n1p1 00:08:20.032 Test: blockdev write read block ...passed 00:08:20.032 Test: blockdev write zeroes read block ...passed 00:08:20.032 Test: blockdev write zeroes read no split ...passed 00:08:20.292 Test: blockdev write zeroes read split ...passed 00:08:20.292 Test: blockdev write zeroes read split partial ...passed 00:08:20.292 Test: blockdev reset ...[2024-11-19 14:09:18.632656] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:20.292 [2024-11-19 14:09:18.635917] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:20.292 passed 00:08:20.292 Test: blockdev write read 8 blocks ...passed 00:08:20.292 Test: blockdev write read size > 128k ...passed 00:08:20.292 Test: blockdev write read invalid size ...passed 00:08:20.292 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:20.292 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:20.292 Test: blockdev write read max offset ...passed 00:08:20.292 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:20.292 Test: blockdev writev readv 8 blocks ...passed 00:08:20.292 Test: blockdev writev readv 30 x 1block ...passed 00:08:20.292 Test: blockdev writev readv block ...passed 00:08:20.292 Test: blockdev writev readv size > 128k ...passed 00:08:20.292 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:20.292 Test: blockdev comparev and writev ...passed 00:08:20.292 Test: blockdev nvme passthru rw ...passed 00:08:20.292 Test: blockdev nvme passthru vendor specific ...passed 00:08:20.292 Test: blockdev nvme admin passthru ...passed 00:08:20.292 Test: blockdev copy ...[2024-11-19 14:09:18.650178] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:08:20.292 separate metadata which is not supported yet. 00:08:20.292 passed 00:08:20.292 00:08:20.292 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.292 suites 7 7 n/a 0 0 00:08:20.292 tests 161 161 161 0 0 00:08:20.292 asserts 1006 1006 1006 0 n/a 00:08:20.292 00:08:20.292 Elapsed time = 2.201 seconds 00:08:20.292 0 00:08:20.292 14:09:18 -- bdev/blockdev.sh@293 -- # killprocess 61977 00:08:20.292 14:09:18 -- common/autotest_common.sh@936 -- # '[' -z 61977 ']' 00:08:20.292 14:09:18 -- common/autotest_common.sh@940 -- # kill -0 61977 00:08:20.292 14:09:18 -- common/autotest_common.sh@941 -- # uname 00:08:20.292 14:09:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:20.292 14:09:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61977 00:08:20.292 14:09:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:20.292 14:09:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:20.292 14:09:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61977' 00:08:20.292 killing process with pid 61977 00:08:20.292 14:09:18 -- common/autotest_common.sh@955 -- # kill 61977 00:08:20.292 14:09:18 -- common/autotest_common.sh@960 -- # wait 61977 00:08:21.676 14:09:20 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:08:21.676 00:08:21.676 real 0m4.041s 00:08:21.676 user 0m10.554s 00:08:21.676 sys 0m0.370s 00:08:21.676 14:09:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.676 14:09:20 -- common/autotest_common.sh@10 -- # set +x 00:08:21.676 ************************************ 00:08:21.676 END TEST bdev_bounds 00:08:21.676 ************************************ 00:08:21.676 14:09:20 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:21.676 14:09:20 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:21.676 14:09:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.676 14:09:20 -- common/autotest_common.sh@10 -- # set +x 00:08:21.676 ************************************ 00:08:21.676 START TEST bdev_nbd 00:08:21.676 ************************************ 00:08:21.676 14:09:20 -- common/autotest_common.sh@1114 
-- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:21.676 14:09:20 -- bdev/blockdev.sh@298 -- # uname -s 00:08:21.676 14:09:20 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:08:21.676 14:09:20 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:21.676 14:09:20 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:21.676 14:09:20 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:21.676 14:09:20 -- bdev/blockdev.sh@302 -- # local bdev_all 00:08:21.676 14:09:20 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:08:21.676 14:09:20 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:08:21.676 14:09:20 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:21.676 14:09:20 -- bdev/blockdev.sh@309 -- # local nbd_all 00:08:21.676 14:09:20 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:08:21.676 14:09:20 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:21.676 14:09:20 -- bdev/blockdev.sh@312 -- # local nbd_list 00:08:21.676 14:09:20 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:21.676 14:09:20 -- bdev/blockdev.sh@313 -- # local bdev_list 00:08:21.676 14:09:20 -- bdev/blockdev.sh@316 -- # nbd_pid=62056 00:08:21.676 14:09:20 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:21.676 14:09:20 -- bdev/blockdev.sh@318 -- # waitforlisten 62056 /var/tmp/spdk-nbd.sock 00:08:21.676 14:09:20 -- common/autotest_common.sh@829 -- # '[' -z 62056 ']' 00:08:21.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:21.676 14:09:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:21.676 14:09:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.676 14:09:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:21.676 14:09:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.676 14:09:20 -- common/autotest_common.sh@10 -- # set +x 00:08:21.677 14:09:20 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:21.677 [2024-11-19 14:09:20.205732] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:21.677 [2024-11-19 14:09:20.205846] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.936 [2024-11-19 14:09:20.357979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.194 [2024-11-19 14:09:20.547045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.568 14:09:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.568 14:09:21 -- common/autotest_common.sh@862 -- # return 0 00:08:23.568 14:09:21 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:23.568 14:09:21 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.568 14:09:21 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:23.568 14:09:21 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:23.568 14:09:21 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:23.568 14:09:21 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.568 14:09:21 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:23.569 14:09:21 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:23.569 14:09:21 -- bdev/nbd_common.sh@24 -- # local i 00:08:23.569 14:09:21 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:23.569 14:09:21 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:23.569 14:09:21 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:23.569 14:09:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:08:23.569 14:09:21 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:23.569 14:09:21 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:23.569 14:09:21 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:23.569 14:09:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:23.569 14:09:21 -- common/autotest_common.sh@867 -- # local i 00:08:23.569 14:09:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:23.569 14:09:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:23.569 14:09:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:23.569 14:09:21 -- common/autotest_common.sh@871 -- # break 00:08:23.569 14:09:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:23.569 14:09:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:23.569 14:09:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:23.569 1+0 records in 00:08:23.569 1+0 records out 00:08:23.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103064 s, 4.0 MB/s 00:08:23.569 14:09:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.569 14:09:21 -- common/autotest_common.sh@884 -- # size=4096 00:08:23.569 14:09:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.569 14:09:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:23.569 14:09:21 -- common/autotest_common.sh@887 -- # return 0 00:08:23.569 14:09:21 -- bdev/nbd_common.sh@27 -- 
# (( i++ )) 00:08:23.569 14:09:21 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:23.569 14:09:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:08:23.827 14:09:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:23.827 14:09:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:23.827 14:09:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:23.827 14:09:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:23.827 14:09:22 -- common/autotest_common.sh@867 -- # local i 00:08:23.827 14:09:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:23.827 14:09:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:23.827 14:09:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:23.827 14:09:22 -- common/autotest_common.sh@871 -- # break 00:08:23.827 14:09:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:23.827 14:09:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:23.827 14:09:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:23.827 1+0 records in 00:08:23.827 1+0 records out 00:08:23.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000834998 s, 4.9 MB/s 00:08:23.827 14:09:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.827 14:09:22 -- common/autotest_common.sh@884 -- # size=4096 00:08:23.827 14:09:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.827 14:09:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:23.827 14:09:22 -- common/autotest_common.sh@887 -- # return 0 00:08:23.827 14:09:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:23.827 14:09:22 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:23.827 14:09:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:23.827 14:09:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:23.827 14:09:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:23.827 14:09:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:23.827 14:09:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:23.827 14:09:22 -- common/autotest_common.sh@867 -- # local i 00:08:23.827 14:09:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:23.827 14:09:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:23.827 14:09:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:23.827 14:09:22 -- common/autotest_common.sh@871 -- # break 00:08:23.827 14:09:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:23.827 14:09:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:23.827 14:09:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:23.827 1+0 records in 00:08:23.827 1+0 records out 00:08:23.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548109 s, 7.5 MB/s 00:08:23.827 14:09:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.827 14:09:22 -- common/autotest_common.sh@884 -- # size=4096 00:08:23.827 14:09:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.086 14:09:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:24.086 14:09:22 -- 
common/autotest_common.sh@887 -- # return 0 00:08:24.086 14:09:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.086 14:09:22 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:24.086 14:09:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:24.086 14:09:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:24.086 14:09:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:24.086 14:09:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:24.086 14:09:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:24.086 14:09:22 -- common/autotest_common.sh@867 -- # local i 00:08:24.086 14:09:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:24.086 14:09:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:24.086 14:09:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:24.086 14:09:22 -- common/autotest_common.sh@871 -- # break 00:08:24.086 14:09:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:24.086 14:09:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:24.086 14:09:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.086 1+0 records in 00:08:24.086 1+0 records out 00:08:24.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000962763 s, 4.3 MB/s 00:08:24.086 14:09:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.086 14:09:22 -- common/autotest_common.sh@884 -- # size=4096 00:08:24.086 14:09:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.086 14:09:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:24.086 14:09:22 -- common/autotest_common.sh@887 -- # return 0 00:08:24.086 14:09:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.086 14:09:22 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:24.086 14:09:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:24.346 14:09:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:24.346 14:09:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:24.346 14:09:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:24.346 14:09:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:24.346 14:09:22 -- common/autotest_common.sh@867 -- # local i 00:08:24.346 14:09:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:24.346 14:09:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:24.346 14:09:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:24.346 14:09:22 -- common/autotest_common.sh@871 -- # break 00:08:24.346 14:09:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:24.346 14:09:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:24.346 14:09:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.346 1+0 records in 00:08:24.346 1+0 records out 00:08:24.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000989704 s, 4.1 MB/s 00:08:24.346 14:09:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.346 14:09:22 -- common/autotest_common.sh@884 -- # size=4096 00:08:24.346 14:09:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.346 14:09:22 
-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:24.346 14:09:22 -- common/autotest_common.sh@887 -- # return 0 00:08:24.346 14:09:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.346 14:09:22 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:24.346 14:09:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:24.606 14:09:23 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:24.606 14:09:23 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:24.606 14:09:23 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:24.606 14:09:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:24.606 14:09:23 -- common/autotest_common.sh@867 -- # local i 00:08:24.606 14:09:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:24.606 14:09:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:24.606 14:09:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:24.606 14:09:23 -- common/autotest_common.sh@871 -- # break 00:08:24.606 14:09:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:24.606 14:09:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:24.606 14:09:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.606 1+0 records in 00:08:24.606 1+0 records out 00:08:24.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138978 s, 2.9 MB/s 00:08:24.606 14:09:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.606 14:09:23 -- common/autotest_common.sh@884 -- # size=4096 00:08:24.606 14:09:23 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.606 14:09:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:24.606 14:09:23 -- common/autotest_common.sh@887 -- # return 0 00:08:24.606 14:09:23 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.606 14:09:23 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:24.606 14:09:23 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:24.866 14:09:23 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:24.866 14:09:23 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:24.866 14:09:23 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:24.866 14:09:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:08:24.866 14:09:23 -- common/autotest_common.sh@867 -- # local i 00:08:24.866 14:09:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:24.866 14:09:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:24.866 14:09:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:08:24.866 14:09:23 -- common/autotest_common.sh@871 -- # break 00:08:24.866 14:09:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:24.866 14:09:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:24.866 14:09:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.866 1+0 records in 00:08:24.866 1+0 records out 00:08:24.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000991247 s, 4.1 MB/s 00:08:24.866 14:09:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.866 14:09:23 -- common/autotest_common.sh@884 -- # size=4096 00:08:24.866 14:09:23 -- common/autotest_common.sh@885 
-- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.866 14:09:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:24.866 14:09:23 -- common/autotest_common.sh@887 -- # return 0 00:08:24.866 14:09:23 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.866 14:09:23 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:24.866 14:09:23 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:25.126 14:09:23 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:25.126 { 00:08:25.126 "nbd_device": "/dev/nbd0", 00:08:25.126 "bdev_name": "Nvme0n1p1" 00:08:25.126 }, 00:08:25.126 { 00:08:25.126 "nbd_device": "/dev/nbd1", 00:08:25.126 "bdev_name": "Nvme0n1p2" 00:08:25.126 }, 00:08:25.126 { 00:08:25.126 "nbd_device": "/dev/nbd2", 00:08:25.126 "bdev_name": "Nvme1n1" 00:08:25.126 }, 00:08:25.126 { 00:08:25.126 "nbd_device": "/dev/nbd3", 00:08:25.126 "bdev_name": "Nvme2n1" 00:08:25.126 }, 00:08:25.126 { 00:08:25.126 "nbd_device": "/dev/nbd4", 00:08:25.126 "bdev_name": "Nvme2n2" 00:08:25.126 }, 00:08:25.126 { 00:08:25.126 "nbd_device": "/dev/nbd5", 00:08:25.126 "bdev_name": "Nvme2n3" 00:08:25.126 }, 00:08:25.126 { 00:08:25.126 "nbd_device": "/dev/nbd6", 00:08:25.126 "bdev_name": "Nvme3n1" 00:08:25.126 } 00:08:25.126 ]' 00:08:25.126 14:09:23 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:25.126 14:09:23 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:25.126 { 00:08:25.126 "nbd_device": "/dev/nbd0", 00:08:25.126 "bdev_name": "Nvme0n1p1" 00:08:25.126 }, 00:08:25.126 { 00:08:25.126 "nbd_device": "/dev/nbd1", 00:08:25.126 "bdev_name": "Nvme0n1p2" 00:08:25.126 }, 00:08:25.126 { 00:08:25.126 "nbd_device": "/dev/nbd2", 00:08:25.126 "bdev_name": "Nvme1n1" 00:08:25.126 }, 00:08:25.126 { 00:08:25.126 "nbd_device": "/dev/nbd3", 00:08:25.126 "bdev_name": "Nvme2n1" 00:08:25.126 }, 00:08:25.126 { 00:08:25.126 "nbd_device": "/dev/nbd4", 00:08:25.126 "bdev_name": "Nvme2n2" 00:08:25.126 }, 00:08:25.126 { 00:08:25.126 "nbd_device": "/dev/nbd5", 00:08:25.126 "bdev_name": "Nvme2n3" 00:08:25.126 }, 00:08:25.126 { 00:08:25.126 "nbd_device": "/dev/nbd6", 00:08:25.126 "bdev_name": "Nvme3n1" 00:08:25.126 } 00:08:25.126 ]' 00:08:25.126 14:09:23 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:25.126 14:09:23 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:25.126 14:09:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.126 14:09:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:25.126 14:09:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:25.126 14:09:23 -- bdev/nbd_common.sh@51 -- # local i 00:08:25.126 14:09:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.126 14:09:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:25.386 14:09:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:25.386 14:09:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:25.386 14:09:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:25.386 14:09:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.386 14:09:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.386 14:09:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:25.386 14:09:23 -- 
bdev/nbd_common.sh@41 -- # break 00:08:25.386 14:09:23 -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.386 14:09:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.386 14:09:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:25.647 14:09:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:25.647 14:09:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:25.647 14:09:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:25.647 14:09:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.647 14:09:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.647 14:09:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:25.647 14:09:23 -- bdev/nbd_common.sh@41 -- # break 00:08:25.647 14:09:23 -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.647 14:09:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.647 14:09:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:25.647 14:09:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:25.647 14:09:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:25.647 14:09:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:25.647 14:09:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.647 14:09:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.647 14:09:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:25.647 14:09:24 -- bdev/nbd_common.sh@41 -- # break 00:08:25.647 14:09:24 -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.647 14:09:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.647 14:09:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:25.906 14:09:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:25.906 14:09:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:25.906 14:09:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:25.906 14:09:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.906 14:09:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.906 14:09:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:25.906 14:09:24 -- bdev/nbd_common.sh@41 -- # break 00:08:25.906 14:09:24 -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.906 14:09:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.906 14:09:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:26.166 14:09:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:26.166 14:09:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:26.166 14:09:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:26.166 14:09:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.166 14:09:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.166 14:09:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:26.166 14:09:24 -- bdev/nbd_common.sh@41 -- # break 00:08:26.166 14:09:24 -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.166 14:09:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.166 14:09:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:26.451 14:09:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:26.451 14:09:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit 
nbd5 00:08:26.451 14:09:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:26.451 14:09:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.451 14:09:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.451 14:09:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:26.451 14:09:24 -- bdev/nbd_common.sh@41 -- # break 00:08:26.451 14:09:24 -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.451 14:09:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.451 14:09:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:26.730 14:09:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:26.730 14:09:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:26.730 14:09:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:26.730 14:09:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.730 14:09:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.730 14:09:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:26.730 14:09:25 -- bdev/nbd_common.sh@41 -- # break 00:08:26.730 14:09:25 -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.730 14:09:25 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:26.730 14:09:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.730 14:09:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:26.730 14:09:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:26.730 14:09:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:26.730 14:09:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@65 -- # true 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@65 -- # count=0 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@122 -- # count=0 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@127 -- # return 0 00:08:26.991 14:09:25 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@12 -- # local i 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:08:26.991 /dev/nbd0 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:26.991 14:09:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:26.991 14:09:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:26.991 14:09:25 -- common/autotest_common.sh@867 -- # local i 00:08:26.991 14:09:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:26.991 14:09:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:26.991 14:09:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:26.991 14:09:25 -- common/autotest_common.sh@871 -- # break 00:08:26.991 14:09:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:26.991 14:09:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:26.991 14:09:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:26.991 1+0 records in 00:08:26.991 1+0 records out 00:08:26.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121087 s, 3.4 MB/s 00:08:26.991 14:09:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.253 14:09:25 -- common/autotest_common.sh@884 -- # size=4096 00:08:27.253 14:09:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.253 14:09:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:27.253 14:09:25 -- common/autotest_common.sh@887 -- # return 0 00:08:27.253 14:09:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.253 14:09:25 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:27.253 14:09:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:08:27.253 /dev/nbd1 00:08:27.253 14:09:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:27.253 14:09:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:27.253 14:09:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:27.253 14:09:25 -- common/autotest_common.sh@867 -- # local i 00:08:27.253 14:09:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:27.253 14:09:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:27.253 14:09:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:27.253 14:09:25 -- common/autotest_common.sh@871 -- # break 00:08:27.253 14:09:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:27.253 14:09:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:27.253 14:09:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.253 1+0 records in 00:08:27.253 1+0 records out 00:08:27.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100662 s, 4.1 MB/s 00:08:27.253 14:09:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.253 14:09:25 -- 
common/autotest_common.sh@884 -- # size=4096 00:08:27.253 14:09:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.253 14:09:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:27.253 14:09:25 -- common/autotest_common.sh@887 -- # return 0 00:08:27.253 14:09:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.253 14:09:25 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:27.253 14:09:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:08:27.513 /dev/nbd10 00:08:27.513 14:09:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:27.513 14:09:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:27.513 14:09:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:27.513 14:09:26 -- common/autotest_common.sh@867 -- # local i 00:08:27.513 14:09:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:27.513 14:09:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:27.513 14:09:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:27.513 14:09:26 -- common/autotest_common.sh@871 -- # break 00:08:27.513 14:09:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:27.513 14:09:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:27.513 14:09:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.513 1+0 records in 00:08:27.513 1+0 records out 00:08:27.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000946654 s, 4.3 MB/s 00:08:27.513 14:09:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.513 14:09:26 -- common/autotest_common.sh@884 -- # size=4096 00:08:27.513 14:09:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.513 14:09:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:27.513 14:09:26 -- common/autotest_common.sh@887 -- # return 0 00:08:27.513 14:09:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.513 14:09:26 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:27.513 14:09:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:27.773 /dev/nbd11 00:08:27.773 14:09:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:27.773 14:09:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:27.773 14:09:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:27.773 14:09:26 -- common/autotest_common.sh@867 -- # local i 00:08:27.773 14:09:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:27.773 14:09:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:27.773 14:09:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:27.773 14:09:26 -- common/autotest_common.sh@871 -- # break 00:08:27.773 14:09:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:27.773 14:09:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:27.773 14:09:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.773 1+0 records in 00:08:27.773 1+0 records out 00:08:27.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000851307 s, 4.8 MB/s 00:08:27.773 14:09:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
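Note: the readiness probe repeated for each device above is the waitfornbd helper from common/autotest_common.sh. Reconstructed approximately from this xtrace (function lines @866-@887), it has the shape below; the sleep between polls is an assumption, since xtrace does not show the delay, and the real source may differ in detail.

# waitfornbd, as reconstructed from the trace above -- a sketch, not the verbatim helper.
waitfornbd() {
    local nbd_name=$1
    local i size

    # Wait for the kernel to publish the device in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            break
        fi
        sleep 0.1 # assumed back-off; the xtrace does not show the delay
    done

    # Prove the device is readable: one 4 KiB direct-I/O read must return data.
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        if [ "$size" != 0 ]; then
            return 0
        fi
    done
    return 1
}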
00:08:27.773 14:09:26 -- common/autotest_common.sh@884 -- # size=4096 00:08:27.773 14:09:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.773 14:09:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:27.773 14:09:26 -- common/autotest_common.sh@887 -- # return 0 00:08:27.773 14:09:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.773 14:09:26 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:27.773 14:09:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:28.033 /dev/nbd12 00:08:28.033 14:09:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:28.033 14:09:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:28.033 14:09:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:28.033 14:09:26 -- common/autotest_common.sh@867 -- # local i 00:08:28.033 14:09:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:28.033 14:09:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:28.033 14:09:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:08:28.034 14:09:26 -- common/autotest_common.sh@871 -- # break 00:08:28.034 14:09:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:28.034 14:09:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:28.034 14:09:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.034 1+0 records in 00:08:28.034 1+0 records out 00:08:28.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133942 s, 3.1 MB/s 00:08:28.034 14:09:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.034 14:09:26 -- common/autotest_common.sh@884 -- # size=4096 00:08:28.034 14:09:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.034 14:09:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:28.034 14:09:26 -- common/autotest_common.sh@887 -- # return 0 00:08:28.034 14:09:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.034 14:09:26 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:28.034 14:09:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:28.294 /dev/nbd13 00:08:28.294 14:09:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:28.294 14:09:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:28.294 14:09:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:28.294 14:09:26 -- common/autotest_common.sh@867 -- # local i 00:08:28.294 14:09:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:28.294 14:09:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:28.294 14:09:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:28.294 14:09:26 -- common/autotest_common.sh@871 -- # break 00:08:28.294 14:09:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:28.294 14:09:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:28.294 14:09:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.294 1+0 records in 00:08:28.294 1+0 records out 00:08:28.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00131903 s, 3.1 MB/s 00:08:28.294 14:09:26 -- common/autotest_common.sh@884 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.294 14:09:26 -- common/autotest_common.sh@884 -- # size=4096 00:08:28.294 14:09:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.294 14:09:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:28.294 14:09:26 -- common/autotest_common.sh@887 -- # return 0 00:08:28.294 14:09:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.294 14:09:26 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:28.294 14:09:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:28.555 /dev/nbd14 00:08:28.555 14:09:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:28.555 14:09:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:28.555 14:09:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:08:28.555 14:09:26 -- common/autotest_common.sh@867 -- # local i 00:08:28.555 14:09:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:28.555 14:09:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:28.555 14:09:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:08:28.555 14:09:26 -- common/autotest_common.sh@871 -- # break 00:08:28.555 14:09:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:28.555 14:09:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:28.555 14:09:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.555 1+0 records in 00:08:28.555 1+0 records out 00:08:28.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129367 s, 3.2 MB/s 00:08:28.555 14:09:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.555 14:09:26 -- common/autotest_common.sh@884 -- # size=4096 00:08:28.555 14:09:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.555 14:09:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:28.555 14:09:26 -- common/autotest_common.sh@887 -- # return 0 00:08:28.555 14:09:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.555 14:09:26 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:28.555 14:09:26 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:28.555 14:09:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.555 14:09:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:28.815 14:09:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:28.815 { 00:08:28.815 "nbd_device": "/dev/nbd0", 00:08:28.815 "bdev_name": "Nvme0n1p1" 00:08:28.815 }, 00:08:28.815 { 00:08:28.815 "nbd_device": "/dev/nbd1", 00:08:28.815 "bdev_name": "Nvme0n1p2" 00:08:28.815 }, 00:08:28.815 { 00:08:28.815 "nbd_device": "/dev/nbd10", 00:08:28.815 "bdev_name": "Nvme1n1" 00:08:28.815 }, 00:08:28.815 { 00:08:28.815 "nbd_device": "/dev/nbd11", 00:08:28.815 "bdev_name": "Nvme2n1" 00:08:28.815 }, 00:08:28.815 { 00:08:28.815 "nbd_device": "/dev/nbd12", 00:08:28.815 "bdev_name": "Nvme2n2" 00:08:28.815 }, 00:08:28.815 { 00:08:28.815 "nbd_device": "/dev/nbd13", 00:08:28.815 "bdev_name": "Nvme2n3" 00:08:28.815 }, 00:08:28.815 { 00:08:28.815 "nbd_device": "/dev/nbd14", 00:08:28.815 "bdev_name": "Nvme3n1" 00:08:28.815 } 00:08:28.815 ]' 00:08:28.815 14:09:27 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:28.815 { 00:08:28.815 "nbd_device": 
"/dev/nbd0", 00:08:28.815 "bdev_name": "Nvme0n1p1" 00:08:28.815 }, 00:08:28.815 { 00:08:28.815 "nbd_device": "/dev/nbd1", 00:08:28.815 "bdev_name": "Nvme0n1p2" 00:08:28.815 }, 00:08:28.815 { 00:08:28.815 "nbd_device": "/dev/nbd10", 00:08:28.815 "bdev_name": "Nvme1n1" 00:08:28.815 }, 00:08:28.815 { 00:08:28.815 "nbd_device": "/dev/nbd11", 00:08:28.815 "bdev_name": "Nvme2n1" 00:08:28.815 }, 00:08:28.815 { 00:08:28.815 "nbd_device": "/dev/nbd12", 00:08:28.815 "bdev_name": "Nvme2n2" 00:08:28.815 }, 00:08:28.815 { 00:08:28.815 "nbd_device": "/dev/nbd13", 00:08:28.815 "bdev_name": "Nvme2n3" 00:08:28.815 }, 00:08:28.815 { 00:08:28.815 "nbd_device": "/dev/nbd14", 00:08:28.815 "bdev_name": "Nvme3n1" 00:08:28.815 } 00:08:28.815 ]' 00:08:28.815 14:09:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:28.815 14:09:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:28.815 /dev/nbd1 00:08:28.815 /dev/nbd10 00:08:28.815 /dev/nbd11 00:08:28.815 /dev/nbd12 00:08:28.815 /dev/nbd13 00:08:28.815 /dev/nbd14' 00:08:28.815 14:09:27 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:28.815 /dev/nbd1 00:08:28.815 /dev/nbd10 00:08:28.815 /dev/nbd11 00:08:28.815 /dev/nbd12 00:08:28.815 /dev/nbd13 00:08:28.815 /dev/nbd14' 00:08:28.815 14:09:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:28.815 14:09:27 -- bdev/nbd_common.sh@65 -- # count=7 00:08:28.815 14:09:27 -- bdev/nbd_common.sh@66 -- # echo 7 00:08:28.815 14:09:27 -- bdev/nbd_common.sh@95 -- # count=7 00:08:28.815 14:09:27 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:28.815 14:09:27 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:28.815 14:09:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:28.815 14:09:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:28.815 14:09:27 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:28.815 14:09:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:28.815 14:09:27 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:28.815 14:09:27 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:28.815 256+0 records in 00:08:28.815 256+0 records out 00:08:28.815 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00760781 s, 138 MB/s 00:08:28.815 14:09:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:28.815 14:09:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:29.074 256+0 records in 00:08:29.074 256+0 records out 00:08:29.074 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.273415 s, 3.8 MB/s 00:08:29.074 14:09:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.074 14:09:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:29.333 256+0 records in 00:08:29.333 256+0 records out 00:08:29.333 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.232253 s, 4.5 MB/s 00:08:29.333 14:09:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.333 14:09:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:29.592 256+0 records in 00:08:29.592 256+0 records out 00:08:29.592 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.265437 s, 4.0 MB/s 00:08:29.592 14:09:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.592 14:09:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:29.851 256+0 records in 00:08:29.851 256+0 records out 00:08:29.851 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.208632 s, 5.0 MB/s 00:08:29.851 14:09:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.851 14:09:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:30.111 256+0 records in 00:08:30.111 256+0 records out 00:08:30.111 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.266241 s, 3.9 MB/s 00:08:30.111 14:09:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:30.111 14:09:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:30.373 256+0 records in 00:08:30.373 256+0 records out 00:08:30.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191042 s, 5.5 MB/s 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:30.373 256+0 records in 00:08:30.373 256+0 records out 00:08:30.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148221 s, 7.1 MB/s 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:30.373 14:09:28 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@51 -- # local i 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.373 14:09:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:30.634 14:09:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:30.634 14:09:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:30.634 14:09:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:30.634 14:09:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.634 14:09:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.634 14:09:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:30.634 14:09:29 -- bdev/nbd_common.sh@41 -- # break 00:08:30.634 14:09:29 -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.634 14:09:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.634 14:09:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:30.895 14:09:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:30.895 14:09:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:30.895 14:09:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:30.895 14:09:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.895 14:09:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.895 14:09:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:30.895 14:09:29 -- bdev/nbd_common.sh@41 -- # break 00:08:30.895 14:09:29 -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.895 14:09:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.895 14:09:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:31.155 14:09:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:31.156 14:09:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:31.156 14:09:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:31.156 14:09:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.156 14:09:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.156 14:09:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:31.156 14:09:29 -- bdev/nbd_common.sh@41 -- # break 00:08:31.156 14:09:29 -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.156 14:09:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.156 14:09:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:31.156 14:09:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:31.156 14:09:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:31.156 14:09:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:31.156 14:09:29 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.156 14:09:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.156 14:09:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:31.416 14:09:29 -- bdev/nbd_common.sh@41 -- # break 00:08:31.416 14:09:29 -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.416 14:09:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.416 14:09:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:31.416 14:09:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:31.416 14:09:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:31.416 14:09:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:31.416 14:09:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.416 14:09:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.416 14:09:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:31.417 14:09:29 -- bdev/nbd_common.sh@41 -- # break 00:08:31.417 14:09:29 -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.417 14:09:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.417 14:09:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:31.677 14:09:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:31.677 14:09:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:31.677 14:09:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:31.677 14:09:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.677 14:09:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.677 14:09:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:31.677 14:09:30 -- bdev/nbd_common.sh@41 -- # break 00:08:31.677 14:09:30 -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.677 14:09:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.677 14:09:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:31.938 14:09:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:31.938 14:09:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:31.938 14:09:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:31.938 14:09:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.938 14:09:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.938 14:09:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:31.938 14:09:30 -- bdev/nbd_common.sh@41 -- # break 00:08:31.938 14:09:30 -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.938 14:09:30 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:31.938 14:09:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.938 14:09:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:32.199 14:09:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:32.199 14:09:30 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:32.199 14:09:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:32.199 14:09:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:32.199 14:09:30 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:32.199 14:09:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:32.199 14:09:30 -- bdev/nbd_common.sh@65 -- # true 00:08:32.199 14:09:30 -- bdev/nbd_common.sh@65 -- # count=0 00:08:32.199 14:09:30 -- bdev/nbd_common.sh@66 -- # echo 0 
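Note: two nbd_common.sh helpers account for most of the preceding trace: nbd_dd_data_verify, which pushed 1 MiB of urandom data through every device and byte-compared it back with cmp, and nbd_get_count, whose jq | grep -c pipeline just reported 0 devices after the stops. Below is a condensed sketch of the data-verify helper, reconstructed from the xtrace; the real helper takes write and verify as separate operations, collapsed here for brevity.

# Sketch of nbd_dd_data_verify inferred from the trace -- not the verbatim source.
nbd_dd_data_verify() {
    local nbd_list=("$@") # e.g. /dev/nbd0 /dev/nbd1 ... /dev/nbd14
    local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    local i

    # Write phase: 1 MiB of random data, pushed through each device with O_DIRECT.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for i in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
    done

    # Verify phase: byte-compare the first 1 MiB of every device against the file.
    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$i"
    done
    rm "$tmp_file"
}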
00:08:32.199 14:09:30 -- bdev/nbd_common.sh@104 -- # count=0 00:08:32.199 14:09:30 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:32.200 14:09:30 -- bdev/nbd_common.sh@109 -- # return 0 00:08:32.200 14:09:30 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:32.200 14:09:30 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.200 14:09:30 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:32.200 14:09:30 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:32.200 14:09:30 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:32.200 14:09:30 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:32.200 malloc_lvol_verify 00:08:32.461 14:09:30 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:32.461 bff78266-ee99-45f1-a472-1cf2a4c9759b 00:08:32.461 14:09:30 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:32.723 137e766a-afd6-429b-98bc-2d37232df12e 00:08:32.723 14:09:31 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:32.984 /dev/nbd0 00:08:32.984 14:09:31 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:32.984 mke2fs 1.47.0 (5-Feb-2023) 00:08:32.984 Discarding device blocks: 0/4096 done 00:08:32.984 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:32.984 00:08:32.984 Allocating group tables: 0/1 done 00:08:32.984 Writing inode tables: 0/1 done 00:08:32.984 Creating journal (1024 blocks): done 00:08:32.984 Writing superblocks and filesystem accounting information: 0/1 done 00:08:32.984 00:08:32.984 14:09:31 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:32.984 14:09:31 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:32.984 14:09:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.984 14:09:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:32.984 14:09:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:32.984 14:09:31 -- bdev/nbd_common.sh@51 -- # local i 00:08:32.984 14:09:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.984 14:09:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:33.246 14:09:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:33.246 14:09:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:33.246 14:09:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:33.246 14:09:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:33.246 14:09:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:33.246 14:09:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:33.246 14:09:31 -- bdev/nbd_common.sh@41 -- # break 00:08:33.246 14:09:31 -- bdev/nbd_common.sh@45 -- # return 0 00:08:33.246 14:09:31 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:33.246 14:09:31 -- bdev/nbd_common.sh@147 -- # return 0 00:08:33.246 14:09:31 -- bdev/blockdev.sh@324 -- # killprocess 62056 00:08:33.246 14:09:31 -- common/autotest_common.sh@936 -- # '[' -z 62056 ']' 
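Note: the lvol sequence traced above (malloc bdev -> lvstore -> lvol -> nbd export -> mkfs) is the nbd_with_lvol_verify check. Condensed into the RPC calls it actually issued; the UUIDs printed above will of course differ on every run.

# The lvol-over-nbd smoke test, condensed from the trace above.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$rpc bdev_malloc_create -b malloc_lvol_verify 16 512 # 16 MB backing bdev, 512 B blocks
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs # prints the lvstore UUID
$rpc bdev_lvol_create lvol 4 -l lvs                  # 4 MB lvol inside it
$rpc nbd_start_disk lvs/lvol /dev/nbd0               # export the lvol as /dev/nbd0
mkfs.ext4 /dev/nbd0                                  # must format cleanly
$rpc nbd_stop_disk /dev/nbd0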
00:08:33.246 14:09:31 -- common/autotest_common.sh@940 -- # kill -0 62056 00:08:33.246 14:09:31 -- common/autotest_common.sh@941 -- # uname 00:08:33.246 14:09:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:33.246 14:09:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62056 00:08:33.246 killing process with pid 62056 00:08:33.246 14:09:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:33.246 14:09:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:33.246 14:09:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62056' 00:08:33.246 14:09:31 -- common/autotest_common.sh@955 -- # kill 62056 00:08:33.246 14:09:31 -- common/autotest_common.sh@960 -- # wait 62056 00:08:34.191 ************************************ 00:08:34.191 END TEST bdev_nbd 00:08:34.191 ************************************ 00:08:34.191 14:09:32 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:34.191 00:08:34.191 real 0m12.585s 00:08:34.191 user 0m16.773s 00:08:34.191 sys 0m3.938s 00:08:34.191 14:09:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.191 14:09:32 -- common/autotest_common.sh@10 -- # set +x 00:08:34.453 14:09:32 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:34.453 14:09:32 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:08:34.453 skipping fio tests on NVMe due to multi-ns failures. 00:08:34.453 14:09:32 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:08:34.453 14:09:32 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:34.453 14:09:32 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:34.453 14:09:32 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:34.453 14:09:32 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:34.453 14:09:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.453 14:09:32 -- common/autotest_common.sh@10 -- # set +x 00:08:34.453 ************************************ 00:08:34.453 START TEST bdev_verify 00:08:34.453 ************************************ 00:08:34.453 14:09:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:34.453 [2024-11-19 14:09:32.873398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:34.453 [2024-11-19 14:09:32.873556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62493 ] 00:08:34.715 [2024-11-19 14:09:33.023286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:34.976 [2024-11-19 14:09:33.303034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.976 [2024-11-19 14:09:33.303053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.548 Running I/O for 5 seconds... 
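Note: the bdev_verify stage that just started boils down to the single bdevperf invocation below, copied from the run_test line above. -m 0x3 pins reactors to cores 0 and 1 (hence the two reactor lines), -q 128 keeps a queue depth of 128 per job, -o 4096 sets 4 KiB I/Os, and -w verify makes bdevperf read back and check what it writes; -C and the trailing '' are kept exactly as in the trace.

# The verify pass, spelled out (flags exactly as captured in the trace above):
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''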
00:08:40.858
00:08:40.858 Latency(us)
00:08:40.858 [2024-11-19T14:09:39.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:40.858 [2024-11-19T14:09:39.420Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:40.858 Verification LBA range: start 0x0 length 0x5e800
00:08:40.858 Nvme0n1p1 : 5.06 1895.53 7.40 0.00 0.00 67292.58 9628.75 77433.30
00:08:40.858 [2024-11-19T14:09:39.420Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:40.858 Verification LBA range: start 0x5e800 length 0x5e800
00:08:40.858 Nvme0n1p1 : 5.06 1869.56 7.30 0.00 0.00 68206.39 9124.63 79853.10
00:08:40.858 [2024-11-19T14:09:39.420Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:40.858 Verification LBA range: start 0x0 length 0x5e7ff
00:08:40.858 Nvme0n1p2 : 5.07 1893.75 7.40 0.00 0.00 67278.84 11494.01 75013.51
00:08:40.858 [2024-11-19T14:09:39.420Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:40.858 Verification LBA range: start 0x5e7ff length 0x5e7ff
00:08:40.858 Nvme0n1p2 : 5.07 1873.80 7.32 0.00 0.00 67971.74 7360.20 70173.93
00:08:40.858 [2024-11-19T14:09:39.420Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:40.858 Verification LBA range: start 0x0 length 0xa0000
00:08:40.858 Nvme1n1 : 5.07 1898.59 7.42 0.00 0.00 66965.85 6276.33 64527.75
00:08:40.858 [2024-11-19T14:09:39.420Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:40.858 Verification LBA range: start 0xa0000 length 0xa0000
00:08:40.858 Nvme1n1 : 5.08 1871.39 7.31 0.00 0.00 67948.13 12855.14 66544.25
00:08:40.858 [2024-11-19T14:09:39.420Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:40.858 Verification LBA range: start 0x0 length 0x80000
00:08:40.858 Nvme2n1 : 5.08 1896.25 7.41 0.00 0.00 66913.95 11141.12 61704.66
00:08:40.858 [2024-11-19T14:09:39.420Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:40.858 Verification LBA range: start 0x80000 length 0x80000
00:08:40.858 Nvme2n1 : 5.09 1869.46 7.30 0.00 0.00 67896.36 16736.89 66140.95
00:08:40.858 [2024-11-19T14:09:39.420Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:40.858 Verification LBA range: start 0x0 length 0x80000
00:08:40.858 Nvme2n2 : 5.09 1894.35 7.40 0.00 0.00 66857.66 15526.99 60898.07
00:08:40.858 [2024-11-19T14:09:39.420Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:40.858 Verification LBA range: start 0x80000 length 0x80000
00:08:40.858 Nvme2n2 : 5.09 1867.66 7.30 0.00 0.00 67835.48 19761.62 63317.86
00:08:40.858 [2024-11-19T14:09:39.420Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:40.858 Verification LBA range: start 0x0 length 0x80000
00:08:40.858 Nvme2n3 : 5.09 1892.77 7.39 0.00 0.00 66820.38 18350.08 62107.96
00:08:40.858 [2024-11-19T14:09:39.420Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:40.858 Verification LBA range: start 0x80000 length 0x80000
00:08:40.858 Nvme2n3 : 5.10 1865.81 7.29 0.00 0.00 67770.67 23592.96 64527.75
00:08:40.858 [2024-11-19T14:09:39.420Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:40.858 Verification LBA range: start 0x0 length 0x20000
00:08:40.858 Nvme3n1 : 5.09 1891.36 7.39 0.00 0.00 66788.75 20769.87 61704.66
00:08:40.858 [2024-11-19T14:09:39.420Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:40.858 Verification LBA range: start 0x20000 length 0x20000
00:08:40.858 Nvme3n1 : 5.10 1864.95 7.28 0.00 0.00 67720.86 21878.94 64124.46
00:08:40.858 [2024-11-19T14:09:39.420Z] ===================================================================================================================
00:08:40.858 [2024-11-19T14:09:39.420Z] Total : 26345.21 102.91 0.00 0.00 67444.50 6276.33 79853.10
00:08:45.065
00:08:45.065 real 0m10.243s
00:08:45.065 user 0m17.074s
00:08:45.065 sys 0m0.456s
00:08:45.065 14:09:43 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:45.065 ************************************
00:08:45.065 END TEST bdev_verify
00:08:45.065 ************************************
00:08:45.065 14:09:43 -- common/autotest_common.sh@10 -- # set +x
00:08:45.065 14:09:43 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:45.065 14:09:43 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:08:45.065 14:09:43 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:45.065 14:09:43 -- common/autotest_common.sh@10 -- # set +x
00:08:45.065 ************************************
00:08:45.065 START TEST bdev_verify_big_io
00:08:45.065 ************************************
00:08:45.065 14:09:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:45.065 [2024-11-19 14:09:43.188383] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... [2024-11-19 14:09:43.188538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62612 ]
00:08:45.065 [2024-11-19 14:09:43.344911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:45.065 [2024-11-19 14:09:43.614505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:45.065 [2024-11-19 14:09:43.614589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:46.006 Running I/O for 5 seconds...
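Note: the big-I/O pass uses the same harness with one change, 64 KiB I/Os (-o 65536) instead of 4 KiB. Comparing the Total lines of the two runs makes the trade-off visible: IOPS drop from 26345.21 to 3404.38 in the table that follows, while throughput rises from 102.91 to 212.77 MiB/s.

# Identical to the previous run except for the I/O size (flags from the trace above):
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''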
00:08:52.613 00:08:52.613 Latency(us) 00:08:52.613 [2024-11-19T14:09:51.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.613 [2024-11-19T14:09:51.175Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:52.613 Verification LBA range: start 0x0 length 0x5e80 00:08:52.613 Nvme0n1p1 : 5.39 265.49 16.59 0.00 0.00 472958.79 25407.80 680767.80 00:08:52.613 [2024-11-19T14:09:51.175Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:52.613 Verification LBA range: start 0x5e80 length 0x5e80 00:08:52.613 Nvme0n1p1 : 5.41 163.54 10.22 0.00 0.00 766928.65 33272.12 1064707.94 00:08:52.613 [2024-11-19T14:09:51.175Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:52.613 Verification LBA range: start 0x0 length 0x5e7f 00:08:52.613 Nvme0n1p2 : 5.39 265.41 16.59 0.00 0.00 467672.02 25508.63 625919.21 00:08:52.613 [2024-11-19T14:09:51.175Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:52.613 Verification LBA range: start 0x5e7f length 0x5e7f 00:08:52.613 Nvme0n1p2 : 5.44 168.98 10.56 0.00 0.00 728496.68 24702.03 967916.31 00:08:52.613 [2024-11-19T14:09:51.175Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:52.613 Verification LBA range: start 0x0 length 0xa000 00:08:52.613 Nvme1n1 : 5.40 273.89 17.12 0.00 0.00 452515.22 5999.06 574297.01 00:08:52.613 [2024-11-19T14:09:51.175Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:52.613 Verification LBA range: start 0xa000 length 0xa000 00:08:52.613 Nvme1n1 : 5.44 168.94 10.56 0.00 0.00 711893.65 25105.33 864671.90 00:08:52.613 [2024-11-19T14:09:51.175Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:52.613 Verification LBA range: start 0x0 length 0x8000 00:08:52.613 Nvme2n1 : 5.40 273.80 17.11 0.00 0.00 447434.80 6553.60 529127.58 00:08:52.613 [2024-11-19T14:09:51.175Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:52.613 Verification LBA range: start 0x8000 length 0x8000 00:08:52.613 Nvme2n1 : 5.46 176.15 11.01 0.00 0.00 668060.08 19660.80 764653.88 00:08:52.613 [2024-11-19T14:09:51.175Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:52.613 Verification LBA range: start 0x0 length 0x8000 00:08:52.613 Nvme2n2 : 5.40 273.71 17.11 0.00 0.00 442318.48 6402.36 480731.77 00:08:52.613 [2024-11-19T14:09:51.175Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:52.613 Verification LBA range: start 0x8000 length 0x8000 00:08:52.613 Nvme2n2 : 5.53 204.07 12.75 0.00 0.00 566551.42 13107.20 677541.42 00:08:52.613 [2024-11-19T14:09:51.175Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:52.613 Verification LBA range: start 0x0 length 0x8000 00:08:52.613 Nvme2n3 : 5.40 273.62 17.10 0.00 0.00 437190.05 7057.72 480731.77 00:08:52.613 [2024-11-19T14:09:51.175Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:52.613 Verification LBA range: start 0x8000 length 0x8000 00:08:52.613 Nvme2n3 : 5.59 248.71 15.54 0.00 0.00 456440.88 6856.07 864671.90 00:08:52.613 [2024-11-19T14:09:51.175Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:52.613 Verification LBA range: start 0x0 length 0x2000 00:08:52.613 Nvme3n1 : 5.40 281.98 17.62 0.00 0.00 420084.18 683.72 490410.93 00:08:52.613 [2024-11-19T14:09:51.175Z] Job: 
Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:52.613 Verification LBA range: start 0x2000 length 0x2000 00:08:52.613 Nvme3n1 : 5.68 366.08 22.88 0.00 0.00 305629.58 412.75 877577.45 00:08:52.613 [2024-11-19T14:09:51.175Z] =================================================================================================================== 00:08:52.613 [2024-11-19T14:09:51.175Z] Total : 3404.38 212.77 0.00 0.00 493544.04 412.75 1064707.94 00:08:53.550 00:08:53.550 real 0m8.909s 00:08:53.550 user 0m16.453s 00:08:53.550 sys 0m0.403s 00:08:53.550 14:09:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:53.550 ************************************ 00:08:53.550 END TEST bdev_verify_big_io 00:08:53.550 ************************************ 00:08:53.550 14:09:52 -- common/autotest_common.sh@10 -- # set +x 00:08:53.550 14:09:52 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:53.550 14:09:52 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:53.550 14:09:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.550 14:09:52 -- common/autotest_common.sh@10 -- # set +x 00:08:53.550 ************************************ 00:08:53.550 START TEST bdev_write_zeroes 00:08:53.550 ************************************ 00:08:53.550 14:09:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:53.812 [2024-11-19 14:09:52.140377] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:53.812 [2024-11-19 14:09:52.140496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62721 ] 00:08:53.812 [2024-11-19 14:09:52.286693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.073 [2024-11-19 14:09:52.491222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.644 Running I/O for 1 seconds... 
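The MiB/s column in the latency table that follows is just IOPS times IO size; a quick check of the first row (4096-byte IOs in this write_zeroes run):

    # 7427.93 IOPS * 4096 B per IO / 2^20 B per MiB ~= 29.02 MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 7427.93 * 4096 / (1024 * 1024) }'
    # prints 29.02, matching the Nvme0n1p1 row below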
00:08:55.583 00:08:55.583 Latency(us) 00:08:55.583 [2024-11-19T14:09:54.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.583 [2024-11-19T14:09:54.145Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:55.583 Nvme0n1p1 : 1.02 7427.93 29.02 0.00 0.00 17065.45 7511.43 32263.88 00:08:55.583 [2024-11-19T14:09:54.145Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:55.583 Nvme0n1p2 : 1.02 7383.31 28.84 0.00 0.00 17238.30 7057.72 32263.88 00:08:55.583 [2024-11-19T14:09:54.145Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:55.583 Nvme1n1 : 1.02 7685.72 30.02 0.00 0.00 16422.15 6856.07 24601.21 00:08:55.583 [2024-11-19T14:09:54.145Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:55.583 Nvme2n1 : 1.03 7677.03 29.99 0.00 0.00 16399.75 7108.14 24702.03 00:08:55.583 [2024-11-19T14:09:54.145Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:55.583 Nvme2n2 : 1.03 7668.42 29.95 0.00 0.00 16372.12 7410.61 25004.50 00:08:55.583 [2024-11-19T14:09:54.145Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:55.583 Nvme2n3 : 1.03 7659.82 29.92 0.00 0.00 16351.32 7662.67 24802.86 00:08:55.583 [2024-11-19T14:09:54.145Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:55.583 Nvme3n1 : 1.03 7710.16 30.12 0.00 0.00 16241.03 5419.32 25508.63 00:08:55.583 [2024-11-19T14:09:54.145Z] =================================================================================================================== 00:08:55.583 [2024-11-19T14:09:54.145Z] Total : 53212.39 207.86 0.00 0.00 16576.86 5419.32 32263.88 00:08:56.969 00:08:56.969 real 0m3.071s 00:08:56.969 user 0m2.707s 00:08:56.969 sys 0m0.239s 00:08:56.969 14:09:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:56.969 ************************************ 00:08:56.969 END TEST bdev_write_zeroes 00:08:56.969 14:09:55 -- common/autotest_common.sh@10 -- # set +x 00:08:56.969 ************************************ 00:08:56.969 14:09:55 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:56.969 14:09:55 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:56.969 14:09:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:56.969 14:09:55 -- common/autotest_common.sh@10 -- # set +x 00:08:56.969 ************************************ 00:08:56.969 START TEST bdev_json_nonenclosed 00:08:56.969 ************************************ 00:08:56.969 14:09:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:56.969 [2024-11-19 14:09:55.303289] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
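bdev_json_nonenclosed feeds bdevperf a config file that is missing its enclosing braces. For contrast, a sketch of the minimal valid shape an SPDK JSON config must have (subsystem contents elided):

    cat > /tmp/valid.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }
    EOF
    # Dropping the outer {} from a file like this is what makes
    # spdk_subsystem_init_from_json_config fail with "not enclosed in {}" below,
    # which is exactly the failure this test asserts.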
00:08:56.969 [2024-11-19 14:09:55.303449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62774 ] 00:08:56.969 [2024-11-19 14:09:55.460287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.229 [2024-11-19 14:09:55.730714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.229 [2024-11-19 14:09:55.730954] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:57.229 [2024-11-19 14:09:55.730986] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:57.800 00:08:57.800 real 0m0.856s 00:08:57.800 user 0m0.594s 00:08:57.800 sys 0m0.154s 00:08:57.800 14:09:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:57.800 14:09:56 -- common/autotest_common.sh@10 -- # set +x 00:08:57.800 ************************************ 00:08:57.800 END TEST bdev_json_nonenclosed 00:08:57.800 ************************************ 00:08:57.800 14:09:56 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:57.800 14:09:56 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:57.800 14:09:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.800 14:09:56 -- common/autotest_common.sh@10 -- # set +x 00:08:57.800 ************************************ 00:08:57.800 START TEST bdev_json_nonarray 00:08:57.800 ************************************ 00:08:57.800 14:09:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:57.800 [2024-11-19 14:09:56.211675] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:57.800 [2024-11-19 14:09:56.211819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62805 ] 00:08:58.061 [2024-11-19 14:09:56.365645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.322 [2024-11-19 14:09:56.636377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.322 [2024-11-19 14:09:56.636634] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
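The nonarray case keeps the outer object but makes "subsystems" something other than an array; a hypothetical minimal reproduction (the real nonarray.json contents are not shown in this log):

    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": "bdev" }
    EOF
    # Loading this yields the "'subsystems' should be an array" error seen above,
    # followed by the non-zero spdk_app_stop below.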
00:08:58.322 [2024-11-19 14:09:56.636658] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:58.583 00:08:58.583 real 0m0.850s 00:08:58.583 user 0m0.600s 00:08:58.583 sys 0m0.141s 00:08:58.583 14:09:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:58.583 ************************************ 00:08:58.583 END TEST bdev_json_nonarray 00:08:58.583 ************************************ 00:08:58.583 14:09:56 -- common/autotest_common.sh@10 -- # set +x 00:08:58.583 14:09:57 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:08:58.583 14:09:57 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:08:58.583 14:09:57 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:58.583 14:09:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:58.583 14:09:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:58.583 14:09:57 -- common/autotest_common.sh@10 -- # set +x 00:08:58.583 ************************************ 00:08:58.583 START TEST bdev_gpt_uuid 00:08:58.583 ************************************ 00:08:58.583 14:09:57 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:08:58.583 14:09:57 -- bdev/blockdev.sh@612 -- # local bdev 00:08:58.583 14:09:57 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:08:58.583 14:09:57 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=62836 00:08:58.583 14:09:57 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:58.583 14:09:57 -- bdev/blockdev.sh@47 -- # waitforlisten 62836 00:08:58.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.583 14:09:57 -- common/autotest_common.sh@829 -- # '[' -z 62836 ']' 00:08:58.583 14:09:57 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:58.583 14:09:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.583 14:09:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:58.583 14:09:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.583 14:09:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:58.583 14:09:57 -- common/autotest_common.sh@10 -- # set +x 00:08:58.844 [2024-11-19 14:09:57.143704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:58.844 [2024-11-19 14:09:57.143890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62836 ] 00:08:58.844 [2024-11-19 14:09:57.300975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.106 [2024-11-19 14:09:57.577697] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:59.106 [2024-11-19 14:09:57.577982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.493 14:09:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:00.493 14:09:58 -- common/autotest_common.sh@862 -- # return 0 00:09:00.493 14:09:58 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:00.493 14:09:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.493 14:09:58 -- common/autotest_common.sh@10 -- # set +x 00:09:00.493 Some configs were skipped because the RPC state that can call them passed over. 
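The rpc_cmd assertions that follow pull the GPT partition metadata back out of the running target; the same query can be issued by hand, a sketch assuming the standard rpc.py location in this repo and the UUID from the test's known GPT layout:

    sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 |
        jq -r '.[0].driver_specific.gpt.partition_name'
    # expected output, per the JSON dump below: SPDK_TEST_first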
00:09:00.493 14:09:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.493 14:09:59 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:09:00.493 14:09:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.493 14:09:59 -- common/autotest_common.sh@10 -- # set +x 00:09:00.493 14:09:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.493 14:09:59 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:00.493 14:09:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.493 14:09:59 -- common/autotest_common.sh@10 -- # set +x 00:09:00.493 14:09:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.493 14:09:59 -- bdev/blockdev.sh@619 -- # bdev='[ 00:09:00.493 { 00:09:00.493 "name": "Nvme0n1p1", 00:09:00.493 "aliases": [ 00:09:00.493 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:00.493 ], 00:09:00.493 "product_name": "GPT Disk", 00:09:00.493 "block_size": 4096, 00:09:00.493 "num_blocks": 774144, 00:09:00.493 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:00.493 "md_size": 64, 00:09:00.493 "md_interleave": false, 00:09:00.493 "dif_type": 0, 00:09:00.493 "assigned_rate_limits": { 00:09:00.493 "rw_ios_per_sec": 0, 00:09:00.493 "rw_mbytes_per_sec": 0, 00:09:00.493 "r_mbytes_per_sec": 0, 00:09:00.493 "w_mbytes_per_sec": 0 00:09:00.493 }, 00:09:00.493 "claimed": false, 00:09:00.493 "zoned": false, 00:09:00.493 "supported_io_types": { 00:09:00.493 "read": true, 00:09:00.493 "write": true, 00:09:00.493 "unmap": true, 00:09:00.493 "write_zeroes": true, 00:09:00.493 "flush": true, 00:09:00.493 "reset": true, 00:09:00.493 "compare": true, 00:09:00.493 "compare_and_write": false, 00:09:00.493 "abort": true, 00:09:00.493 "nvme_admin": false, 00:09:00.493 "nvme_io": false 00:09:00.493 }, 00:09:00.493 "driver_specific": { 00:09:00.493 "gpt": { 00:09:00.493 "base_bdev": "Nvme0n1", 00:09:00.493 "offset_blocks": 256, 00:09:00.493 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:00.493 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:00.493 "partition_name": "SPDK_TEST_first" 00:09:00.493 } 00:09:00.493 } 00:09:00.493 } 00:09:00.493 ]' 00:09:00.493 14:09:59 -- bdev/blockdev.sh@620 -- # jq -r length 00:09:00.755 14:09:59 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:09:00.755 14:09:59 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:09:00.755 14:09:59 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:00.755 14:09:59 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:00.755 14:09:59 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:00.755 14:09:59 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:00.755 14:09:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.755 14:09:59 -- common/autotest_common.sh@10 -- # set +x 00:09:00.755 14:09:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.755 14:09:59 -- bdev/blockdev.sh@624 -- # bdev='[ 00:09:00.755 { 00:09:00.755 "name": "Nvme0n1p2", 00:09:00.755 "aliases": [ 00:09:00.755 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:00.755 ], 00:09:00.755 "product_name": "GPT Disk", 00:09:00.755 "block_size": 4096, 00:09:00.755 "num_blocks": 774143, 00:09:00.755 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:09:00.755 "md_size": 64, 00:09:00.755 "md_interleave": false, 00:09:00.755 "dif_type": 0, 00:09:00.755 "assigned_rate_limits": { 00:09:00.755 "rw_ios_per_sec": 0, 00:09:00.755 "rw_mbytes_per_sec": 0, 00:09:00.755 "r_mbytes_per_sec": 0, 00:09:00.755 "w_mbytes_per_sec": 0 00:09:00.755 }, 00:09:00.755 "claimed": false, 00:09:00.755 "zoned": false, 00:09:00.755 "supported_io_types": { 00:09:00.755 "read": true, 00:09:00.755 "write": true, 00:09:00.755 "unmap": true, 00:09:00.755 "write_zeroes": true, 00:09:00.755 "flush": true, 00:09:00.755 "reset": true, 00:09:00.755 "compare": true, 00:09:00.755 "compare_and_write": false, 00:09:00.755 "abort": true, 00:09:00.755 "nvme_admin": false, 00:09:00.755 "nvme_io": false 00:09:00.755 }, 00:09:00.755 "driver_specific": { 00:09:00.755 "gpt": { 00:09:00.755 "base_bdev": "Nvme0n1", 00:09:00.755 "offset_blocks": 774400, 00:09:00.755 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:00.755 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:00.755 "partition_name": "SPDK_TEST_second" 00:09:00.755 } 00:09:00.755 } 00:09:00.755 } 00:09:00.755 ]' 00:09:00.755 14:09:59 -- bdev/blockdev.sh@625 -- # jq -r length 00:09:00.755 14:09:59 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:09:00.755 14:09:59 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:09:00.755 14:09:59 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:00.755 14:09:59 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:00.755 14:09:59 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:00.755 14:09:59 -- bdev/blockdev.sh@629 -- # killprocess 62836 00:09:00.755 14:09:59 -- common/autotest_common.sh@936 -- # '[' -z 62836 ']' 00:09:00.755 14:09:59 -- common/autotest_common.sh@940 -- # kill -0 62836 00:09:00.755 14:09:59 -- common/autotest_common.sh@941 -- # uname 00:09:00.755 14:09:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:00.755 14:09:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62836 00:09:00.755 14:09:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:00.755 killing process with pid 62836 00:09:00.755 14:09:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:00.755 14:09:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62836' 00:09:00.755 14:09:59 -- common/autotest_common.sh@955 -- # kill 62836 00:09:00.755 14:09:59 -- common/autotest_common.sh@960 -- # wait 62836 00:09:02.666 00:09:02.666 real 0m3.724s 00:09:02.666 user 0m3.841s 00:09:02.666 sys 0m0.619s 00:09:02.666 14:10:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.666 ************************************ 00:09:02.666 END TEST bdev_gpt_uuid 00:09:02.666 ************************************ 00:09:02.666 14:10:00 -- common/autotest_common.sh@10 -- # set +x 00:09:02.666 14:10:00 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:09:02.666 14:10:00 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:02.666 14:10:00 -- bdev/blockdev.sh@809 -- # cleanup 00:09:02.666 14:10:00 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:02.666 14:10:00 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:02.666 14:10:00 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:09:02.666 14:10:00 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:09:02.666 14:10:00 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:09:02.666 14:10:00 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:02.924 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:02.924 Waiting for block devices as requested 00:09:02.924 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:09:02.924 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:09:02.924 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:09:03.236 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:08.548 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:08.548 14:10:06 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:09:08.548 14:10:06 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:09:08.548 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:08.548 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:08.548 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:08.548 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:09:08.548 14:10:06 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:09:08.548 00:09:08.548 real 1m6.164s 00:09:08.548 user 1m24.512s 00:09:08.548 sys 0m9.472s 00:09:08.548 14:10:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:08.548 ************************************ 00:09:08.548 END TEST blockdev_nvme_gpt 00:09:08.548 ************************************ 00:09:08.548 14:10:06 -- common/autotest_common.sh@10 -- # set +x 00:09:08.548 14:10:06 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:08.548 14:10:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:08.548 14:10:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:08.548 14:10:06 -- common/autotest_common.sh@10 -- # set +x 00:09:08.548 ************************************ 00:09:08.548 START TEST nvme 00:09:08.548 ************************************ 00:09:08.548 14:10:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:08.548 * Looking for test storage... 
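The xtrace that follows is scripts/common.sh comparing the installed lcov version against 1.15 by splitting both versions on dots and comparing field by field; a condensed sketch of the same idea (not the literal cmp_versions implementation):

    lt() {  # usage: lt VER1 VER2  -> returns 0 if VER1 < VER2
      local -a a b; local i
      IFS=.- read -ra a <<< "$1"
      IFS=.- read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1  # equal
    }
    lt 1.15 2 && echo "1.15 < 2"   # same outcome as the 'lt 1.15 2' call traced below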
00:09:08.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:08.548 14:10:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:08.548 14:10:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:08.548 14:10:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:08.548 14:10:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:08.548 14:10:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:08.549 14:10:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:08.549 14:10:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:08.549 14:10:07 -- scripts/common.sh@335 -- # IFS=.-: 00:09:08.549 14:10:07 -- scripts/common.sh@335 -- # read -ra ver1 00:09:08.549 14:10:07 -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.549 14:10:07 -- scripts/common.sh@336 -- # read -ra ver2 00:09:08.549 14:10:07 -- scripts/common.sh@337 -- # local 'op=<' 00:09:08.549 14:10:07 -- scripts/common.sh@339 -- # ver1_l=2 00:09:08.549 14:10:07 -- scripts/common.sh@340 -- # ver2_l=1 00:09:08.549 14:10:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:08.549 14:10:07 -- scripts/common.sh@343 -- # case "$op" in 00:09:08.549 14:10:07 -- scripts/common.sh@344 -- # : 1 00:09:08.549 14:10:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:08.549 14:10:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:08.549 14:10:07 -- scripts/common.sh@364 -- # decimal 1 00:09:08.549 14:10:07 -- scripts/common.sh@352 -- # local d=1 00:09:08.549 14:10:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.549 14:10:07 -- scripts/common.sh@354 -- # echo 1 00:09:08.549 14:10:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:08.549 14:10:07 -- scripts/common.sh@365 -- # decimal 2 00:09:08.549 14:10:07 -- scripts/common.sh@352 -- # local d=2 00:09:08.549 14:10:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.549 14:10:07 -- scripts/common.sh@354 -- # echo 2 00:09:08.549 14:10:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:08.549 14:10:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:08.549 14:10:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:08.549 14:10:07 -- scripts/common.sh@367 -- # return 0 00:09:08.549 14:10:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.549 14:10:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:08.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.549 --rc genhtml_branch_coverage=1 00:09:08.549 --rc genhtml_function_coverage=1 00:09:08.549 --rc genhtml_legend=1 00:09:08.549 --rc geninfo_all_blocks=1 00:09:08.549 --rc geninfo_unexecuted_blocks=1 00:09:08.549 00:09:08.549 ' 00:09:08.549 14:10:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:08.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.549 --rc genhtml_branch_coverage=1 00:09:08.549 --rc genhtml_function_coverage=1 00:09:08.549 --rc genhtml_legend=1 00:09:08.549 --rc geninfo_all_blocks=1 00:09:08.549 --rc geninfo_unexecuted_blocks=1 00:09:08.549 00:09:08.549 ' 00:09:08.549 14:10:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:08.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.549 --rc genhtml_branch_coverage=1 00:09:08.549 --rc genhtml_function_coverage=1 00:09:08.549 --rc genhtml_legend=1 00:09:08.549 --rc geninfo_all_blocks=1 00:09:08.549 --rc geninfo_unexecuted_blocks=1 00:09:08.549 00:09:08.549 ' 00:09:08.549 14:10:07 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:08.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.549 --rc genhtml_branch_coverage=1 00:09:08.549 --rc genhtml_function_coverage=1 00:09:08.549 --rc genhtml_legend=1 00:09:08.549 --rc geninfo_all_blocks=1 00:09:08.549 --rc geninfo_unexecuted_blocks=1 00:09:08.549 00:09:08.549 ' 00:09:08.549 14:10:07 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:09.487 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:09.487 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:09.487 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:09.487 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:09.487 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:09.748 14:10:08 -- nvme/nvme.sh@79 -- # uname 00:09:09.748 14:10:08 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:09.748 14:10:08 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:09.748 14:10:08 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:09.748 14:10:08 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:09.748 14:10:08 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:09:09.748 14:10:08 -- common/autotest_common.sh@1055 -- # echo 0 00:09:09.748 14:10:08 -- common/autotest_common.sh@1057 -- # stubpid=63503 00:09:09.748 Waiting for stub to ready for secondary processes... 00:09:09.748 14:10:08 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:09:09.748 14:10:08 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:09.748 14:10:08 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63503 ]] 00:09:09.748 14:10:08 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:09.748 14:10:08 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:09.748 [2024-11-19 14:10:08.172569] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
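While the stub initializes, the harness loops on a marker file; a sketch mirroring the wait loop in the trace above (marker path, pid, and poll interval as logged; the exact readiness semantics are the harness's, not restated here):

    while [[ ! -e /var/run/spdk_stub0 ]]; do
      [[ -e /proc/63503 ]] || { echo "stub exited before becoming ready" >&2; exit 1; }
      sleep 1   # re-check once per second, as in the harness's sleep 1s
    done
    echo "stub ready; secondary SPDK processes may now attach"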
00:09:09.748 [2024-11-19 14:10:08.172669] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.689 [2024-11-19 14:10:08.905971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:10.689 [2024-11-19 14:10:09.073082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.689 [2024-11-19 14:10:09.073333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.689 [2024-11-19 14:10:09.073350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.689 [2024-11-19 14:10:09.091624] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:10.689 [2024-11-19 14:10:09.103436] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:10.689 [2024-11-19 14:10:09.103565] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:10.689 [2024-11-19 14:10:09.118285] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:10.689 [2024-11-19 14:10:09.118419] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:10.689 [2024-11-19 14:10:09.118511] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:10.689 [2024-11-19 14:10:09.126163] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:10.689 [2024-11-19 14:10:09.126318] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:10.689 [2024-11-19 14:10:09.126415] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:10.689 [2024-11-19 14:10:09.133824] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:10.689 [2024-11-19 14:10:09.133963] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:10.689 [2024-11-19 14:10:09.134052] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:10.689 [2024-11-19 14:10:09.134136] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:10.689 [2024-11-19 14:10:09.134252] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:10.689 14:10:09 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:10.689 done. 00:09:10.689 14:10:09 -- common/autotest_common.sh@1064 -- # echo done. 
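The cuse_session_create notices above mean each SPDK-attached controller and namespace is now also exposed as a character device, so standard ioctl-based tooling can reach them; a sketch, assuming nvme-cli is installed and the nodes land under /dev/spdk as named in the log:

    ls /dev/spdk/
    # e.g. nvme0  nvme0n1  nvme1  nvme1n1  nvme2  nvme2n1  nvme3  nvme3n1 ...
    sudo nvme id-ctrl /dev/spdk/nvme0   # ordinary NVMe admin ioctl, served via CUSE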
00:09:10.689 14:10:09 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:10.689 14:10:09 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:10.689 14:10:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.689 14:10:09 -- common/autotest_common.sh@10 -- # set +x 00:09:10.689 ************************************ 00:09:10.689 START TEST nvme_reset 00:09:10.689 ************************************ 00:09:10.689 14:10:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:10.950 Initializing NVMe Controllers 00:09:10.950 Skipping QEMU NVMe SSD at 0000:00:06.0 00:09:10.950 Skipping QEMU NVMe SSD at 0000:00:07.0 00:09:10.950 Skipping QEMU NVMe SSD at 0000:00:09.0 00:09:10.950 Skipping QEMU NVMe SSD at 0000:00:08.0 00:09:10.950 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:10.950 00:09:10.950 real 0m0.201s 00:09:10.950 user 0m0.058s 00:09:10.950 sys 0m0.098s 00:09:10.950 14:10:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:10.950 ************************************ 00:09:10.950 END TEST nvme_reset 00:09:10.950 14:10:09 -- common/autotest_common.sh@10 -- # set +x 00:09:10.950 ************************************ 00:09:10.950 14:10:09 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:10.950 14:10:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:10.950 14:10:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.950 14:10:09 -- common/autotest_common.sh@10 -- # set +x 00:09:10.950 ************************************ 00:09:10.950 START TEST nvme_identify 00:09:10.950 ************************************ 00:09:10.950 14:10:09 -- common/autotest_common.sh@1114 -- # nvme_identify 00:09:10.950 14:10:09 -- nvme/nvme.sh@12 -- # bdfs=() 00:09:10.950 14:10:09 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:10.950 14:10:09 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:10.950 14:10:09 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:10.950 14:10:09 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:10.950 14:10:09 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:10.950 14:10:09 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:10.950 14:10:09 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:10.950 14:10:09 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:10.950 14:10:09 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:10.950 14:10:09 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:10.950 14:10:09 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:11.215 [2024-11-19 14:10:09.658891] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 63533 terminated unexpected 00:09:11.215 ===================================================== 00:09:11.215 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:11.215 ===================================================== 00:09:11.215 Controller Capabilities/Features 00:09:11.215 ================================ 00:09:11.215 Vendor ID: 1b36 00:09:11.215 Subsystem Vendor ID: 1af4 00:09:11.215 Serial Number: 12340 00:09:11.215 Model Number: QEMU NVMe Ctrl 00:09:11.215 Firmware Version: 8.0.0 00:09:11.215 Recommended Arb 
Burst: 6 00:09:11.215 IEEE OUI Identifier: 00 54 52 00:09:11.215 Multi-path I/O 00:09:11.215 May have multiple subsystem ports: No 00:09:11.215 May have multiple controllers: No 00:09:11.215 Associated with SR-IOV VF: No 00:09:11.215 Max Data Transfer Size: 524288 00:09:11.215 Max Number of Namespaces: 256 00:09:11.215 Max Number of I/O Queues: 64 00:09:11.215 NVMe Specification Version (VS): 1.4 00:09:11.215 NVMe Specification Version (Identify): 1.4 00:09:11.215 Maximum Queue Entries: 2048 00:09:11.215 Contiguous Queues Required: Yes 00:09:11.215 Arbitration Mechanisms Supported 00:09:11.215 Weighted Round Robin: Not Supported 00:09:11.215 Vendor Specific: Not Supported 00:09:11.215 Reset Timeout: 7500 ms 00:09:11.215 Doorbell Stride: 4 bytes 00:09:11.215 NVM Subsystem Reset: Not Supported 00:09:11.215 Command Sets Supported 00:09:11.215 NVM Command Set: Supported 00:09:11.215 Boot Partition: Not Supported 00:09:11.215 Memory Page Size Minimum: 4096 bytes 00:09:11.215 Memory Page Size Maximum: 65536 bytes 00:09:11.215 Persistent Memory Region: Not Supported 00:09:11.215 Optional Asynchronous Events Supported 00:09:11.215 Namespace Attribute Notices: Supported 00:09:11.215 Firmware Activation Notices: Not Supported 00:09:11.215 ANA Change Notices: Not Supported 00:09:11.215 PLE Aggregate Log Change Notices: Not Supported 00:09:11.215 LBA Status Info Alert Notices: Not Supported 00:09:11.215 EGE Aggregate Log Change Notices: Not Supported 00:09:11.215 Normal NVM Subsystem Shutdown event: Not Supported 00:09:11.215 Zone Descriptor Change Notices: Not Supported 00:09:11.215 Discovery Log Change Notices: Not Supported 00:09:11.215 Controller Attributes 00:09:11.215 128-bit Host Identifier: Not Supported 00:09:11.215 Non-Operational Permissive Mode: Not Supported 00:09:11.215 NVM Sets: Not Supported 00:09:11.215 Read Recovery Levels: Not Supported 00:09:11.215 Endurance Groups: Not Supported 00:09:11.215 Predictable Latency Mode: Not Supported 00:09:11.215 Traffic Based Keep ALive: Not Supported 00:09:11.215 Namespace Granularity: Not Supported 00:09:11.215 SQ Associations: Not Supported 00:09:11.215 UUID List: Not Supported 00:09:11.215 Multi-Domain Subsystem: Not Supported 00:09:11.215 Fixed Capacity Management: Not Supported 00:09:11.215 Variable Capacity Management: Not Supported 00:09:11.215 Delete Endurance Group: Not Supported 00:09:11.215 Delete NVM Set: Not Supported 00:09:11.215 Extended LBA Formats Supported: Supported 00:09:11.215 Flexible Data Placement Supported: Not Supported 00:09:11.215 00:09:11.215 Controller Memory Buffer Support 00:09:11.215 ================================ 00:09:11.215 Supported: No 00:09:11.215 00:09:11.215 Persistent Memory Region Support 00:09:11.215 ================================ 00:09:11.215 Supported: No 00:09:11.215 00:09:11.215 Admin Command Set Attributes 00:09:11.215 ============================ 00:09:11.215 Security Send/Receive: Not Supported 00:09:11.215 Format NVM: Supported 00:09:11.215 Firmware Activate/Download: Not Supported 00:09:11.215 Namespace Management: Supported 00:09:11.215 Device Self-Test: Not Supported 00:09:11.215 Directives: Supported 00:09:11.215 NVMe-MI: Not Supported 00:09:11.215 Virtualization Management: Not Supported 00:09:11.215 Doorbell Buffer Config: Supported 00:09:11.215 Get LBA Status Capability: Not Supported 00:09:11.215 Command & Feature Lockdown Capability: Not Supported 00:09:11.215 Abort Command Limit: 4 00:09:11.215 Async Event Request Limit: 4 00:09:11.215 Number of Firmware Slots: N/A 00:09:11.215 
Firmware Slot 1 Read-Only: N/A 00:09:11.215 Firmware Activation Without Reset: N/A 00:09:11.215 Multiple Update Detection Support: N/A 00:09:11.215 Firmware Update Granularity: No Information Provided 00:09:11.215 Per-Namespace SMART Log: Yes 00:09:11.215 Asymmetric Namespace Access Log Page: Not Supported 00:09:11.215 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:11.215 Command Effects Log Page: Supported 00:09:11.215 Get Log Page Extended Data: Supported 00:09:11.215 Telemetry Log Pages: Not Supported 00:09:11.215 Persistent Event Log Pages: Not Supported 00:09:11.215 Supported Log Pages Log Page: May Support 00:09:11.215 Commands Supported & Effects Log Page: Not Supported 00:09:11.215 Feature Identifiers & Effects Log Page:May Support 00:09:11.215 NVMe-MI Commands & Effects Log Page: May Support 00:09:11.215 Data Area 4 for Telemetry Log: Not Supported 00:09:11.215 Error Log Page Entries Supported: 1 00:09:11.215 Keep Alive: Not Supported 00:09:11.215 00:09:11.215 NVM Command Set Attributes 00:09:11.215 ========================== 00:09:11.215 Submission Queue Entry Size 00:09:11.215 Max: 64 00:09:11.215 Min: 64 00:09:11.215 Completion Queue Entry Size 00:09:11.215 Max: 16 00:09:11.215 Min: 16 00:09:11.215 Number of Namespaces: 256 00:09:11.215 Compare Command: Supported 00:09:11.215 Write Uncorrectable Command: Not Supported 00:09:11.216 Dataset Management Command: Supported 00:09:11.216 Write Zeroes Command: Supported 00:09:11.216 Set Features Save Field: Supported 00:09:11.216 Reservations: Not Supported 00:09:11.216 Timestamp: Supported 00:09:11.216 Copy: Supported 00:09:11.216 Volatile Write Cache: Present 00:09:11.216 Atomic Write Unit (Normal): 1 00:09:11.216 Atomic Write Unit (PFail): 1 00:09:11.216 Atomic Compare & Write Unit: 1 00:09:11.216 Fused Compare & Write: Not Supported 00:09:11.216 Scatter-Gather List 00:09:11.216 SGL Command Set: Supported 00:09:11.216 SGL Keyed: Not Supported 00:09:11.216 SGL Bit Bucket Descriptor: Not Supported 00:09:11.216 SGL Metadata Pointer: Not Supported 00:09:11.216 Oversized SGL: Not Supported 00:09:11.216 SGL Metadata Address: Not Supported 00:09:11.216 SGL Offset: Not Supported 00:09:11.216 Transport SGL Data Block: Not Supported 00:09:11.216 Replay Protected Memory Block: Not Supported 00:09:11.216 00:09:11.216 Firmware Slot Information 00:09:11.216 ========================= 00:09:11.216 Active slot: 1 00:09:11.216 Slot 1 Firmware Revision: 1.0 00:09:11.216 00:09:11.216 00:09:11.216 Commands Supported and Effects 00:09:11.216 ============================== 00:09:11.216 Admin Commands 00:09:11.216 -------------- 00:09:11.216 Delete I/O Submission Queue (00h): Supported 00:09:11.216 Create I/O Submission Queue (01h): Supported 00:09:11.216 Get Log Page (02h): Supported 00:09:11.216 Delete I/O Completion Queue (04h): Supported 00:09:11.216 Create I/O Completion Queue (05h): Supported 00:09:11.216 Identify (06h): Supported 00:09:11.216 Abort (08h): Supported 00:09:11.216 Set Features (09h): Supported 00:09:11.216 Get Features (0Ah): Supported 00:09:11.216 Asynchronous Event Request (0Ch): Supported 00:09:11.216 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:11.216 Directive Send (19h): Supported 00:09:11.216 Directive Receive (1Ah): Supported 00:09:11.216 Virtualization Management (1Ch): Supported 00:09:11.216 Doorbell Buffer Config (7Ch): Supported 00:09:11.216 Format NVM (80h): Supported LBA-Change 00:09:11.216 I/O Commands 00:09:11.216 ------------ 00:09:11.216 Flush (00h): Supported LBA-Change 00:09:11.216 Write (01h): 
Supported LBA-Change 00:09:11.216 Read (02h): Supported 00:09:11.216 Compare (05h): Supported 00:09:11.216 Write Zeroes (08h): Supported LBA-Change 00:09:11.216 Dataset Management (09h): Supported LBA-Change 00:09:11.216 Unknown (0Ch): Supported 00:09:11.216 Unknown (12h): Supported 00:09:11.216 Copy (19h): Supported LBA-Change 00:09:11.216 Unknown (1Dh): Supported LBA-Change 00:09:11.216 00:09:11.216 Error Log 00:09:11.216 ========= 00:09:11.216 00:09:11.216 Arbitration 00:09:11.216 =========== 00:09:11.216 Arbitration Burst: no limit 00:09:11.216 00:09:11.216 Power Management 00:09:11.216 ================ 00:09:11.216 Number of Power States: 1 00:09:11.216 Current Power State: Power State #0 00:09:11.216 Power State #0: 00:09:11.216 Max Power: 25.00 W 00:09:11.216 Non-Operational State: Operational 00:09:11.216 Entry Latency: 16 microseconds 00:09:11.216 Exit Latency: 4 microseconds 00:09:11.216 Relative Read Throughput: 0 00:09:11.216 Relative Read Latency: 0 00:09:11.216 Relative Write Throughput: 0 00:09:11.216 Relative Write Latency: 0 00:09:11.216 Idle Power[2024-11-19 14:10:09.660245] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 63533 terminated unexpected 00:09:11.216 : Not Reported 00:09:11.216 Active Power: Not Reported 00:09:11.216 Non-Operational Permissive Mode: Not Supported 00:09:11.216 00:09:11.216 Health Information 00:09:11.216 ================== 00:09:11.216 Critical Warnings: 00:09:11.216 Available Spare Space: OK 00:09:11.216 Temperature: OK 00:09:11.216 Device Reliability: OK 00:09:11.216 Read Only: No 00:09:11.216 Volatile Memory Backup: OK 00:09:11.216 Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.216 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:11.216 Available Spare: 0% 00:09:11.216 Available Spare Threshold: 0% 00:09:11.216 Life Percentage Used: 0% 00:09:11.216 Data Units Read: 1640 00:09:11.216 Data Units Written: 746 00:09:11.216 Host Read Commands: 72675 00:09:11.216 Host Write Commands: 35878 00:09:11.216 Controller Busy Time: 0 minutes 00:09:11.216 Power Cycles: 0 00:09:11.216 Power On Hours: 0 hours 00:09:11.216 Unsafe Shutdowns: 0 00:09:11.216 Unrecoverable Media Errors: 0 00:09:11.216 Lifetime Error Log Entries: 0 00:09:11.216 Warning Temperature Time: 0 minutes 00:09:11.216 Critical Temperature Time: 0 minutes 00:09:11.216 00:09:11.216 Number of Queues 00:09:11.216 ================ 00:09:11.216 Number of I/O Submission Queues: 64 00:09:11.216 Number of I/O Completion Queues: 64 00:09:11.216 00:09:11.216 ZNS Specific Controller Data 00:09:11.216 ============================ 00:09:11.216 Zone Append Size Limit: 0 00:09:11.216 00:09:11.216 00:09:11.216 Active Namespaces 00:09:11.216 ================= 00:09:11.216 Namespace ID:1 00:09:11.216 Error Recovery Timeout: Unlimited 00:09:11.216 Command Set Identifier: NVM (00h) 00:09:11.216 Deallocate: Supported 00:09:11.216 Deallocated/Unwritten Error: Supported 00:09:11.216 Deallocated Read Value: All 0x00 00:09:11.216 Deallocate in Write Zeroes: Not Supported 00:09:11.216 Deallocated Guard Field: 0xFFFF 00:09:11.216 Flush: Supported 00:09:11.216 Reservation: Not Supported 00:09:11.216 Metadata Transferred as: Separate Metadata Buffer 00:09:11.216 Namespace Sharing Capabilities: Private 00:09:11.216 Size (in LBAs): 1548666 (5GiB) 00:09:11.216 Capacity (in LBAs): 1548666 (5GiB) 00:09:11.216 Utilization (in LBAs): 1548666 (5GiB) 00:09:11.216 Thin Provisioning: Not Supported 00:09:11.216 Per-NS Atomic Units: No 00:09:11.216 Maximum Single Source Range Length: 
128 00:09:11.216 Maximum Copy Length: 128 00:09:11.216 Maximum Source Range Count: 128 00:09:11.216 NGUID/EUI64 Never Reused: No 00:09:11.216 Namespace Write Protected: No 00:09:11.216 Number of LBA Formats: 8 00:09:11.216 Current LBA Format: LBA Format #07 00:09:11.216 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:11.216 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:11.216 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:11.216 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:11.216 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:11.216 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:11.216 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:11.216 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:11.216 00:09:11.216 ===================================================== 00:09:11.216 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:11.216 ===================================================== 00:09:11.216 Controller Capabilities/Features 00:09:11.216 ================================ 00:09:11.216 Vendor ID: 1b36 00:09:11.216 Subsystem Vendor ID: 1af4 00:09:11.216 Serial Number: 12341 00:09:11.216 Model Number: QEMU NVMe Ctrl 00:09:11.216 Firmware Version: 8.0.0 00:09:11.216 Recommended Arb Burst: 6 00:09:11.216 IEEE OUI Identifier: 00 54 52 00:09:11.216 Multi-path I/O 00:09:11.216 May have multiple subsystem ports: No 00:09:11.216 May have multiple controllers: No 00:09:11.216 Associated with SR-IOV VF: No 00:09:11.216 Max Data Transfer Size: 524288 00:09:11.216 Max Number of Namespaces: 256 00:09:11.216 Max Number of I/O Queues: 64 00:09:11.216 NVMe Specification Version (VS): 1.4 00:09:11.216 NVMe Specification Version (Identify): 1.4 00:09:11.217 Maximum Queue Entries: 2048 00:09:11.217 Contiguous Queues Required: Yes 00:09:11.217 Arbitration Mechanisms Supported 00:09:11.217 Weighted Round Robin: Not Supported 00:09:11.217 Vendor Specific: Not Supported 00:09:11.217 Reset Timeout: 7500 ms 00:09:11.217 Doorbell Stride: 4 bytes 00:09:11.217 NVM Subsystem Reset: Not Supported 00:09:11.217 Command Sets Supported 00:09:11.217 NVM Command Set: Supported 00:09:11.217 Boot Partition: Not Supported 00:09:11.217 Memory Page Size Minimum: 4096 bytes 00:09:11.217 Memory Page Size Maximum: 65536 bytes 00:09:11.217 Persistent Memory Region: Not Supported 00:09:11.217 Optional Asynchronous Events Supported 00:09:11.217 Namespace Attribute Notices: Supported 00:09:11.217 Firmware Activation Notices: Not Supported 00:09:11.217 ANA Change Notices: Not Supported 00:09:11.217 PLE Aggregate Log Change Notices: Not Supported 00:09:11.217 LBA Status Info Alert Notices: Not Supported 00:09:11.217 EGE Aggregate Log Change Notices: Not Supported 00:09:11.217 Normal NVM Subsystem Shutdown event: Not Supported 00:09:11.217 Zone Descriptor Change Notices: Not Supported 00:09:11.217 Discovery Log Change Notices: Not Supported 00:09:11.217 Controller Attributes 00:09:11.217 128-bit Host Identifier: Not Supported 00:09:11.217 Non-Operational Permissive Mode: Not Supported 00:09:11.217 NVM Sets: Not Supported 00:09:11.217 Read Recovery Levels: Not Supported 00:09:11.217 Endurance Groups: Not Supported 00:09:11.217 Predictable Latency Mode: Not Supported 00:09:11.217 Traffic Based Keep ALive: Not Supported 00:09:11.217 Namespace Granularity: Not Supported 00:09:11.217 SQ Associations: Not Supported 00:09:11.217 UUID List: Not Supported 00:09:11.217 Multi-Domain Subsystem: Not Supported 00:09:11.217 Fixed Capacity Management: Not Supported 00:09:11.217 Variable Capacity 
Management: Not Supported 00:09:11.217 Delete Endurance Group: Not Supported 00:09:11.217 Delete NVM Set: Not Supported 00:09:11.217 Extended LBA Formats Supported: Supported 00:09:11.217 Flexible Data Placement Supported: Not Supported 00:09:11.217 00:09:11.217 Controller Memory Buffer Support 00:09:11.217 ================================ 00:09:11.217 Supported: No 00:09:11.217 00:09:11.217 Persistent Memory Region Support 00:09:11.217 ================================ 00:09:11.217 Supported: No 00:09:11.217 00:09:11.217 Admin Command Set Attributes 00:09:11.217 ============================ 00:09:11.217 Security Send/Receive: Not Supported 00:09:11.217 Format NVM: Supported 00:09:11.217 Firmware Activate/Download: Not Supported 00:09:11.217 Namespace Management: Supported 00:09:11.217 Device Self-Test: Not Supported 00:09:11.217 Directives: Supported 00:09:11.217 NVMe-MI: Not Supported 00:09:11.217 Virtualization Management: Not Supported 00:09:11.217 Doorbell Buffer Config: Supported 00:09:11.217 Get LBA Status Capability: Not Supported 00:09:11.217 Command & Feature Lockdown Capability: Not Supported 00:09:11.217 Abort Command Limit: 4 00:09:11.217 Async Event Request Limit: 4 00:09:11.217 Number of Firmware Slots: N/A 00:09:11.217 Firmware Slot 1 Read-Only: N/A 00:09:11.217 Firmware Activation Without Reset: N/A 00:09:11.217 Multiple Update Detection Support: N/A 00:09:11.217 Firmware Update Granularity: No Information Provided 00:09:11.217 Per-Namespace SMART Log: Yes 00:09:11.217 Asymmetric Namespace Access Log Page: Not Supported 00:09:11.217 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:11.217 Command Effects Log Page: Supported 00:09:11.217 Get Log Page Extended Data: Supported 00:09:11.217 Telemetry Log Pages: Not Supported 00:09:11.217 Persistent Event Log Pages: Not Supported 00:09:11.217 Supported Log Pages Log Page: May Support 00:09:11.217 Commands Supported & Effects Log Page: Not Supported 00:09:11.217 Feature Identifiers & Effects Log Page:May Support 00:09:11.217 NVMe-MI Commands & Effects Log Page: May Support 00:09:11.217 Data Area 4 for Telemetry Log: Not Supported 00:09:11.217 Error Log Page Entries Supported: 1 00:09:11.217 Keep Alive: Not Supported 00:09:11.217 00:09:11.217 NVM Command Set Attributes 00:09:11.217 ========================== 00:09:11.217 Submission Queue Entry Size 00:09:11.217 Max: 64 00:09:11.217 Min: 64 00:09:11.217 Completion Queue Entry Size 00:09:11.217 Max: 16 00:09:11.217 Min: 16 00:09:11.217 Number of Namespaces: 256 00:09:11.217 Compare Command: Supported 00:09:11.217 Write Uncorrectable Command: Not Supported 00:09:11.217 Dataset Management Command: Supported 00:09:11.217 Write Zeroes Command: Supported 00:09:11.217 Set Features Save Field: Supported 00:09:11.217 Reservations: Not Supported 00:09:11.217 Timestamp: Supported 00:09:11.217 Copy: Supported 00:09:11.217 Volatile Write Cache: Present 00:09:11.217 Atomic Write Unit (Normal): 1 00:09:11.217 Atomic Write Unit (PFail): 1 00:09:11.217 Atomic Compare & Write Unit: 1 00:09:11.217 Fused Compare & Write: Not Supported 00:09:11.217 Scatter-Gather List 00:09:11.217 SGL Command Set: Supported 00:09:11.217 SGL Keyed: Not Supported 00:09:11.217 SGL Bit Bucket Descriptor: Not Supported 00:09:11.217 SGL Metadata Pointer: Not Supported 00:09:11.217 Oversized SGL: Not Supported 00:09:11.217 SGL Metadata Address: Not Supported 00:09:11.217 SGL Offset: Not Supported 00:09:11.217 Transport SGL Data Block: Not Supported 00:09:11.217 Replay Protected Memory Block: Not Supported 00:09:11.217 
00:09:11.217 Firmware Slot Information 00:09:11.217 ========================= 00:09:11.217 Active slot: 1 00:09:11.217 Slot 1 Firmware Revision: 1.0 00:09:11.217 00:09:11.217 00:09:11.217 Commands Supported and Effects 00:09:11.217 ============================== 00:09:11.217 Admin Commands 00:09:11.217 -------------- 00:09:11.217 Delete I/O Submission Queue (00h): Supported 00:09:11.217 Create I/O Submission Queue (01h): Supported 00:09:11.217 Get Log Page (02h): Supported 00:09:11.217 Delete I/O Completion Queue (04h): Supported 00:09:11.217 Create I/O Completion Queue (05h): Supported 00:09:11.217 Identify (06h): Supported 00:09:11.217 Abort (08h): Supported 00:09:11.217 Set Features (09h): Supported 00:09:11.217 Get Features (0Ah): Supported 00:09:11.217 Asynchronous Event Request (0Ch): Supported 00:09:11.217 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:11.217 Directive Send (19h): Supported 00:09:11.217 Directive Receive (1Ah): Supported 00:09:11.217 Virtualization Management (1Ch): Supported 00:09:11.217 Doorbell Buffer Config (7Ch): Supported 00:09:11.217 Format NVM (80h): Supported LBA-Change 00:09:11.217 I/O Commands 00:09:11.217 ------------ 00:09:11.217 Flush (00h): Supported LBA-Change 00:09:11.217 Write (01h): Supported LBA-Change 00:09:11.217 Read (02h): Supported 00:09:11.217 Compare (05h): Supported 00:09:11.217 Write Zeroes (08h): Supported LBA-Change 00:09:11.217 Dataset Management (09h): Supported LBA-Change 00:09:11.217 Unknown (0Ch): Supported 00:09:11.217 Unknown (12h): Supported 00:09:11.217 Copy (19h): Supported LBA-Change 00:09:11.218 Unknown (1Dh): Supported LBA-Change 00:09:11.218 00:09:11.218 Error Log 00:09:11.218 ========= 00:09:11.218 00:09:11.218 Arbitration 00:09:11.218 =========== 00:09:11.218 Arbitration Burst: no limit 00:09:11.218 00:09:11.218 Power Management 00:09:11.218 ================ 00:09:11.218 Number of Power States: 1 00:09:11.218 Current Power State: Power State #0 00:09:11.218 Power State #0: 00:09:11.218 Max Power: 25.00 W 00:09:11.218 Non-Operational State: Operational 00:09:11.218 Entry Latency: 16 microseconds 00:09:11.218 Exit Latency: 4 microseconds 00:09:11.218 Relative Read Throughput: 0 00:09:11.218 Relative Read Latency: 0 00:09:11.218 Relative Write Throughput: 0 00:09:11.218 Relative Write Latency: 0 00:09:11.218 Idle Power: Not Reported 00:09:11.218 Active Power: Not Reported 00:09:11.218 Non-Operational Permissive Mode: Not Supported 00:09:11.218 00:09:11.218 Health Information 00:09:11.218 ================== 00:09:11.218 Critical Warnings: 00:09:11.218 Available Spare Space: OK 00:09:11.218 Temperature: OK 00:09:11.218 Device Reliability: OK 00:09:11.218 Read Only: No 00:09:11.218 Volatile Memory Backup: OK 00:09:11.218 Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.218 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:11.218 Available Spare: 0% 00:09:11.218 Available Spare Threshold: 0% 00:09:11.218 Life Percentage Used: 0% 00:09:11.218 Data Units Read: 1111 00:09:11.218 Data Units Written: 510 00:09:11.218 Host Read Commands: 50622 00:09:11.218 Host Write Commands: 24722 00:09:11.218 Controller Busy Time: 0 minutes 00:09:11.218 Power Cycles: 0 00:09:11.218 Power On Hours: 0 hours 00:09:11.218 Unsafe Shutdowns: 0 00:09:11.218 Unrecoverable Media Errors: 0 00:09:11.218 Lifetime Error Log Entries: 0 00:09:11.218 Warning Temperature Time: 0 minutes 00:09:11.218 Critical Temperature Time: 0 minutes 00:09:11.218 00:09:11.218 Number of Queues 00:09:11.218 ================ 00:09:11.218 Number of I/O 
Submission Queues: 64 00:09:11.218 Number of I/O Completion Queues: 64 00:09:11.218 00:09:11.218 ZNS Specific Controller Data 00:09:11.218 ============================ 00:09:11.218 Zone Append Size Limit: 0 00:09:11.218 00:09:11.218 00:09:11.218 Active Namespaces 00:09:11.218 ================= 00:09:11.218 Namespace ID:1 00:09:11.218 Error Recovery Timeout: Unlimited 00:09:11.218 Command Set Identifier: [2024-11-19 14:10:09.662171] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 63533 terminated unexpected 00:09:11.218 NVM (00h) 00:09:11.218 Deallocate: Supported 00:09:11.218 Deallocated/Unwritten Error: Supported 00:09:11.218 Deallocated Read Value: All 0x00 00:09:11.218 Deallocate in Write Zeroes: Not Supported 00:09:11.218 Deallocated Guard Field: 0xFFFF 00:09:11.218 Flush: Supported 00:09:11.218 Reservation: Not Supported 00:09:11.218 Namespace Sharing Capabilities: Private 00:09:11.218 Size (in LBAs): 1310720 (5GiB) 00:09:11.218 Capacity (in LBAs): 1310720 (5GiB) 00:09:11.218 Utilization (in LBAs): 1310720 (5GiB) 00:09:11.218 Thin Provisioning: Not Supported 00:09:11.218 Per-NS Atomic Units: No 00:09:11.218 Maximum Single Source Range Length: 128 00:09:11.218 Maximum Copy Length: 128 00:09:11.218 Maximum Source Range Count: 128 00:09:11.218 NGUID/EUI64 Never Reused: No 00:09:11.218 Namespace Write Protected: No 00:09:11.218 Number of LBA Formats: 8 00:09:11.218 Current LBA Format: LBA Format #04 00:09:11.218 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:11.218 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:11.218 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:11.218 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:11.218 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:11.218 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:11.218 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:11.218 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:11.218 00:09:11.218 ===================================================== 00:09:11.218 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:11.218 ===================================================== 00:09:11.218 Controller Capabilities/Features 00:09:11.218 ================================ 00:09:11.218 Vendor ID: 1b36 00:09:11.218 Subsystem Vendor ID: 1af4 00:09:11.218 Serial Number: 12343 00:09:11.218 Model Number: QEMU NVMe Ctrl 00:09:11.218 Firmware Version: 8.0.0 00:09:11.218 Recommended Arb Burst: 6 00:09:11.218 IEEE OUI Identifier: 00 54 52 00:09:11.218 Multi-path I/O 00:09:11.218 May have multiple subsystem ports: No 00:09:11.218 May have multiple controllers: Yes 00:09:11.218 Associated with SR-IOV VF: No 00:09:11.218 Max Data Transfer Size: 524288 00:09:11.218 Max Number of Namespaces: 256 00:09:11.218 Max Number of I/O Queues: 64 00:09:11.218 NVMe Specification Version (VS): 1.4 00:09:11.218 NVMe Specification Version (Identify): 1.4 00:09:11.218 Maximum Queue Entries: 2048 00:09:11.218 Contiguous Queues Required: Yes 00:09:11.218 Arbitration Mechanisms Supported 00:09:11.218 Weighted Round Robin: Not Supported 00:09:11.218 Vendor Specific: Not Supported 00:09:11.218 Reset Timeout: 7500 ms 00:09:11.218 Doorbell Stride: 4 bytes 00:09:11.218 NVM Subsystem Reset: Not Supported 00:09:11.218 Command Sets Supported 00:09:11.218 NVM Command Set: Supported 00:09:11.218 Boot Partition: Not Supported 00:09:11.218 Memory Page Size Minimum: 4096 bytes 00:09:11.218 Memory Page Size Maximum: 65536 bytes 00:09:11.218 Persistent Memory Region: Not Supported 00:09:11.218 
Optional Asynchronous Events Supported 00:09:11.218 Namespace Attribute Notices: Supported 00:09:11.218 Firmware Activation Notices: Not Supported 00:09:11.218 ANA Change Notices: Not Supported 00:09:11.218 PLE Aggregate Log Change Notices: Not Supported 00:09:11.218 LBA Status Info Alert Notices: Not Supported 00:09:11.218 EGE Aggregate Log Change Notices: Not Supported 00:09:11.218 Normal NVM Subsystem Shutdown event: Not Supported 00:09:11.218 Zone Descriptor Change Notices: Not Supported 00:09:11.218 Discovery Log Change Notices: Not Supported 00:09:11.218 Controller Attributes 00:09:11.218 128-bit Host Identifier: Not Supported 00:09:11.218 Non-Operational Permissive Mode: Not Supported 00:09:11.218 NVM Sets: Not Supported 00:09:11.218 Read Recovery Levels: Not Supported 00:09:11.218 Endurance Groups: Supported 00:09:11.218 Predictable Latency Mode: Not Supported 00:09:11.218 Traffic Based Keep Alive: Not Supported 00:09:11.218 Namespace Granularity: Not Supported 00:09:11.218 SQ Associations: Not Supported 00:09:11.218 UUID List: Not Supported 00:09:11.219 Multi-Domain Subsystem: Not Supported 00:09:11.219 Fixed Capacity Management: Not Supported 00:09:11.219 Variable Capacity Management: Not Supported 00:09:11.219 Delete Endurance Group: Not Supported 00:09:11.219 Delete NVM Set: Not Supported 00:09:11.219 Extended LBA Formats Supported: Supported 00:09:11.219 Flexible Data Placement Supported: Supported 00:09:11.219 00:09:11.219 Controller Memory Buffer Support 00:09:11.219 ================================ 00:09:11.219 Supported: No 00:09:11.219 00:09:11.219 Persistent Memory Region Support 00:09:11.219 ================================ 00:09:11.219 Supported: No 00:09:11.219 00:09:11.219 Admin Command Set Attributes 00:09:11.219 ============================ 00:09:11.219 Security Send/Receive: Not Supported 00:09:11.219 Format NVM: Supported 00:09:11.219 Firmware Activate/Download: Not Supported 00:09:11.219 Namespace Management: Supported 00:09:11.219 Device Self-Test: Not Supported 00:09:11.219 Directives: Supported 00:09:11.219 NVMe-MI: Not Supported 00:09:11.219 Virtualization Management: Not Supported 00:09:11.219 Doorbell Buffer Config: Supported 00:09:11.219 Get LBA Status Capability: Not Supported 00:09:11.219 Command & Feature Lockdown Capability: Not Supported 00:09:11.219 Abort Command Limit: 4 00:09:11.219 Async Event Request Limit: 4 00:09:11.219 Number of Firmware Slots: N/A 00:09:11.219 Firmware Slot 1 Read-Only: N/A 00:09:11.219 Firmware Activation Without Reset: N/A 00:09:11.219 Multiple Update Detection Support: N/A 00:09:11.219 Firmware Update Granularity: No Information Provided 00:09:11.219 Per-Namespace SMART Log: Yes 00:09:11.219 Asymmetric Namespace Access Log Page: Not Supported 00:09:11.219 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:11.219 Command Effects Log Page: Supported 00:09:11.219 Get Log Page Extended Data: Supported 00:09:11.219 Telemetry Log Pages: Not Supported 00:09:11.219 Persistent Event Log Pages: Not Supported 00:09:11.219 Supported Log Pages Log Page: May Support 00:09:11.219 Commands Supported & Effects Log Page: Not Supported 00:09:11.219 Feature Identifiers & Effects Log Page: May Support 00:09:11.219 NVMe-MI Commands & Effects Log Page: May Support 00:09:11.219 Data Area 4 for Telemetry Log: Not Supported 00:09:11.219 Error Log Page Entries Supported: 1 00:09:11.219 Keep Alive: Not Supported 00:09:11.219 00:09:11.219 NVM Command Set Attributes 00:09:11.219 ========================== 00:09:11.219 Submission Queue Entry Size 
00:09:11.219 Max: 64 00:09:11.219 Min: 64 00:09:11.219 Completion Queue Entry Size 00:09:11.219 Max: 16 00:09:11.219 Min: 16 00:09:11.219 Number of Namespaces: 256 00:09:11.219 Compare Command: Supported 00:09:11.219 Write Uncorrectable Command: Not Supported 00:09:11.219 Dataset Management Command: Supported 00:09:11.219 Write Zeroes Command: Supported 00:09:11.219 Set Features Save Field: Supported 00:09:11.219 Reservations: Not Supported 00:09:11.219 Timestamp: Supported 00:09:11.219 Copy: Supported 00:09:11.219 Volatile Write Cache: Present 00:09:11.219 Atomic Write Unit (Normal): 1 00:09:11.219 Atomic Write Unit (PFail): 1 00:09:11.219 Atomic Compare & Write Unit: 1 00:09:11.219 Fused Compare & Write: Not Supported 00:09:11.219 Scatter-Gather List 00:09:11.219 SGL Command Set: Supported 00:09:11.219 SGL Keyed: Not Supported 00:09:11.219 SGL Bit Bucket Descriptor: Not Supported 00:09:11.219 SGL Metadata Pointer: Not Supported 00:09:11.219 Oversized SGL: Not Supported 00:09:11.219 SGL Metadata Address: Not Supported 00:09:11.219 SGL Offset: Not Supported 00:09:11.219 Transport SGL Data Block: Not Supported 00:09:11.219 Replay Protected Memory Block: Not Supported 00:09:11.219 00:09:11.219 Firmware Slot Information 00:09:11.219 ========================= 00:09:11.219 Active slot: 1 00:09:11.219 Slot 1 Firmware Revision: 1.0 00:09:11.219 00:09:11.219 00:09:11.219 Commands Supported and Effects 00:09:11.219 ============================== 00:09:11.219 Admin Commands 00:09:11.219 -------------- 00:09:11.219 Delete I/O Submission Queue (00h): Supported 00:09:11.219 Create I/O Submission Queue (01h): Supported 00:09:11.219 Get Log Page (02h): Supported 00:09:11.219 Delete I/O Completion Queue (04h): Supported 00:09:11.219 Create I/O Completion Queue (05h): Supported 00:09:11.219 Identify (06h): Supported 00:09:11.219 Abort (08h): Supported 00:09:11.219 Set Features (09h): Supported 00:09:11.219 Get Features (0Ah): Supported 00:09:11.219 Asynchronous Event Request (0Ch): Supported 00:09:11.219 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:11.219 Directive Send (19h): Supported 00:09:11.219 Directive Receive (1Ah): Supported 00:09:11.219 Virtualization Management (1Ch): Supported 00:09:11.219 Doorbell Buffer Config (7Ch): Supported 00:09:11.219 Format NVM (80h): Supported LBA-Change 00:09:11.219 I/O Commands 00:09:11.219 ------------ 00:09:11.219 Flush (00h): Supported LBA-Change 00:09:11.219 Write (01h): Supported LBA-Change 00:09:11.219 Read (02h): Supported 00:09:11.219 Compare (05h): Supported 00:09:11.219 Write Zeroes (08h): Supported LBA-Change 00:09:11.219 Dataset Management (09h): Supported LBA-Change 00:09:11.219 Unknown (0Ch): Supported 00:09:11.219 Unknown (12h): Supported 00:09:11.219 Copy (19h): Supported LBA-Change 00:09:11.219 Unknown (1Dh): Supported LBA-Change 00:09:11.219 00:09:11.219 Error Log 00:09:11.219 ========= 00:09:11.219 00:09:11.219 Arbitration 00:09:11.219 =========== 00:09:11.219 Arbitration Burst: no limit 00:09:11.219 00:09:11.219 Power Management 00:09:11.219 ================ 00:09:11.219 Number of Power States: 1 00:09:11.219 Current Power State: Power State #0 00:09:11.219 Power State #0: 00:09:11.219 Max Power: 25.00 W 00:09:11.219 Non-Operational State: Operational 00:09:11.219 Entry Latency: 16 microseconds 00:09:11.219 Exit Latency: 4 microseconds 00:09:11.219 Relative Read Throughput: 0 00:09:11.219 Relative Read Latency: 0 00:09:11.219 Relative Write Throughput: 0 00:09:11.219 Relative Write Latency: 0 00:09:11.219 Idle Power: Not 
Reported 00:09:11.219 Active Power: Not Reported 00:09:11.219 Non-Operational Permissive Mode: Not Supported 00:09:11.219 00:09:11.219 Health Information 00:09:11.219 ================== 00:09:11.219 Critical Warnings: 00:09:11.219 Available Spare Space: OK 00:09:11.219 Temperature: OK 00:09:11.219 Device Reliability: OK 00:09:11.219 Read Only: No 00:09:11.219 Volatile Memory Backup: OK 00:09:11.219 Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.219 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:11.219 Available Spare: 0% 00:09:11.219 Available Spare Threshold: 0% 00:09:11.219 Life Percentage Used: 0% 00:09:11.219 Data Units Read: 1301 00:09:11.219 Data Units Written: 601 00:09:11.219 Host Read Commands: 52254 00:09:11.219 Host Write Commands: 25503 00:09:11.219 Controller Busy Time: 0 minutes 00:09:11.219 Power Cycles: 0 00:09:11.219 Power On Hours: 0 hours 00:09:11.219 Unsafe Shutdowns: 0 00:09:11.219 Unrecoverable Media Errors: 0 00:09:11.219 Lifetime Error Log Entries: 0 00:09:11.219 Warning Temperature Time: 0 minutes 00:09:11.219 Critical Temperature Time: 0 minutes 00:09:11.219 00:09:11.220 Number of Queues 00:09:11.220 ================ 00:09:11.220 Number of I/O Submission Queues: 64 00:09:11.220 Number of I/O Completion Queues: 64 00:09:11.220 00:09:11.220 ZNS Specific Controller Data 00:09:11.220 ============================ 00:09:11.220 Zone Append Size Limit: 0 00:09:11.220 00:09:11.220 00:09:11.220 Active Namespaces 00:09:11.220 ================= 00:09:11.220 Namespace ID:1 00:09:11.220 Error Recovery Timeout: Unlimited 00:09:11.220 Command Set Identifier: NVM (00h) 00:09:11.220 Deallocate: Supported 00:09:11.220 Deallocated/Unwritten Error: Supported 00:09:11.220 Deallocated Read Value: All 0x00 00:09:11.220 Deallocate in Write Zeroes: Not Supported 00:09:11.220 Deallocated Guard Field: 0xFFFF 00:09:11.220 Flush: Supported 00:09:11.220 Reservation: Not Supported 00:09:11.220 Namespace Sharing Capabilities: Multiple Controllers 00:09:11.220 Size (in LBAs): 262144 (1GiB) 00:09:11.220 Capacity (in LBAs): 262144 (1GiB) 00:09:11.220 Utilization (in LBAs): 262144 (1GiB) 00:09:11.220 Thin Provisioning: Not Supported 00:09:11.220 Per-NS Atomic Units: No 00:09:11.220 Maximum Single Source Range Length: 128 00:09:11.220 Maximum Copy Length: 128 00:09:11.220 Maximum Source Range Count: 128 00:09:11.220 NGUID/EUI64 Never Reused: No 00:09:11.220 Namespace Write Protected: No 00:09:11.220 Endurance group ID: 1 00:09:11.220 Number of LBA Formats: 8 00:09:11.220 Current LBA Format: LBA Format #04 00:09:11.220 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:11.220 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:11.220 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:11.220 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:11.220 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:11.220 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:11.220 [2024-11-19 14:10:09.664548] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 63533 terminated unexpected 00:09:11.220 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:11.220 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:11.220 00:09:11.220 Get Feature FDP: 00:09:11.220 ================ 00:09:11.220 Enabled: Yes 00:09:11.220 FDP configuration index: 0 00:09:11.220 00:09:11.220 FDP configurations log page 00:09:11.220 =========================== 00:09:11.220 Number of FDP configurations: 1 00:09:11.220 Version: 0 00:09:11.220 Size: 112 00:09:11.220 FDP 
Configuration Descriptor: 0 00:09:11.220 Descriptor Size: 96 00:09:11.220 Reclaim Group Identifier format: 2 00:09:11.220 FDP Volatile Write Cache: Not Present 00:09:11.220 FDP Configuration: Valid 00:09:11.220 Vendor Specific Size: 0 00:09:11.220 Number of Reclaim Groups: 2 00:09:11.220 Number of Reclaim Unit Handles: 8 00:09:11.220 Max Placement Identifiers: 128 00:09:11.220 Number of Namespaces Supported: 256 00:09:11.220 Reclaim Unit Nominal Size: 6000000 bytes 00:09:11.220 Estimated Reclaim Unit Time Limit: Not Reported 00:09:11.220 RUH Desc #000: RUH Type: Initially Isolated 00:09:11.220 RUH Desc #001: RUH Type: Initially Isolated 00:09:11.220 RUH Desc #002: RUH Type: Initially Isolated 00:09:11.220 RUH Desc #003: RUH Type: Initially Isolated 00:09:11.220 RUH Desc #004: RUH Type: Initially Isolated 00:09:11.220 RUH Desc #005: RUH Type: Initially Isolated 00:09:11.220 RUH Desc #006: RUH Type: Initially Isolated 00:09:11.220 RUH Desc #007: RUH Type: Initially Isolated 00:09:11.220 00:09:11.220 FDP reclaim unit handle usage log page 00:09:11.220 ====================================== 00:09:11.220 Number of Reclaim Unit Handles: 8 00:09:11.220 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:11.220 RUH Usage Desc #001: RUH Attributes: Unused 00:09:11.220 RUH Usage Desc #002: RUH Attributes: Unused 00:09:11.220 RUH Usage Desc #003: RUH Attributes: Unused 00:09:11.220 RUH Usage Desc #004: RUH Attributes: Unused 00:09:11.220 RUH Usage Desc #005: RUH Attributes: Unused 00:09:11.220 RUH Usage Desc #006: RUH Attributes: Unused 00:09:11.220 RUH Usage Desc #007: RUH Attributes: Unused 00:09:11.220 00:09:11.220 FDP statistics log page 00:09:11.220 ======================= 00:09:11.220 Host bytes with metadata written: 400842752 00:09:11.220 Media bytes with metadata written: 400941056 00:09:11.220 Media bytes erased: 0 00:09:11.220 00:09:11.220 FDP events log page 00:09:11.220 =================== 00:09:11.220 Number of FDP events: 0 00:09:11.220 00:09:11.220 ===================================================== 00:09:11.220 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:11.220 ===================================================== 00:09:11.220 Controller Capabilities/Features 00:09:11.220 ================================ 00:09:11.220 Vendor ID: 1b36 00:09:11.220 Subsystem Vendor ID: 1af4 00:09:11.220 Serial Number: 12342 00:09:11.220 Model Number: QEMU NVMe Ctrl 00:09:11.220 Firmware Version: 8.0.0 00:09:11.220 Recommended Arb Burst: 6 00:09:11.220 IEEE OUI Identifier: 00 54 52 00:09:11.220 Multi-path I/O 00:09:11.220 May have multiple subsystem ports: No 00:09:11.220 May have multiple controllers: No 00:09:11.220 Associated with SR-IOV VF: No 00:09:11.220 Max Data Transfer Size: 524288 00:09:11.220 Max Number of Namespaces: 256 00:09:11.220 Max Number of I/O Queues: 64 00:09:11.220 NVMe Specification Version (VS): 1.4 00:09:11.220 NVMe Specification Version (Identify): 1.4 00:09:11.220 Maximum Queue Entries: 2048 00:09:11.220 Contiguous Queues Required: Yes 00:09:11.220 Arbitration Mechanisms Supported 00:09:11.220 Weighted Round Robin: Not Supported 00:09:11.220 Vendor Specific: Not Supported 00:09:11.220 Reset Timeout: 7500 ms 00:09:11.220 Doorbell Stride: 4 bytes 00:09:11.220 NVM Subsystem Reset: Not Supported 00:09:11.220 Command Sets Supported 00:09:11.220 NVM Command Set: Supported 00:09:11.220 Boot Partition: Not Supported 00:09:11.220 Memory Page Size Minimum: 4096 bytes 00:09:11.220 Memory Page Size Maximum: 65536 bytes 00:09:11.220 Persistent Memory Region: Not 
Supported 00:09:11.220 Optional Asynchronous Events Supported 00:09:11.220 Namespace Attribute Notices: Supported 00:09:11.220 Firmware Activation Notices: Not Supported 00:09:11.220 ANA Change Notices: Not Supported 00:09:11.220 PLE Aggregate Log Change Notices: Not Supported 00:09:11.220 LBA Status Info Alert Notices: Not Supported 00:09:11.220 EGE Aggregate Log Change Notices: Not Supported 00:09:11.220 Normal NVM Subsystem Shutdown event: Not Supported 00:09:11.220 Zone Descriptor Change Notices: Not Supported 00:09:11.220 Discovery Log Change Notices: Not Supported 00:09:11.220 Controller Attributes 00:09:11.220 128-bit Host Identifier: Not Supported 00:09:11.220 Non-Operational Permissive Mode: Not Supported 00:09:11.220 NVM Sets: Not Supported 00:09:11.220 Read Recovery Levels: Not Supported 00:09:11.220 Endurance Groups: Not Supported 00:09:11.220 Predictable Latency Mode: Not Supported 00:09:11.220 Traffic Based Keep Alive: Not Supported 00:09:11.220 Namespace Granularity: Not Supported 00:09:11.220 SQ Associations: Not Supported 00:09:11.220 UUID List: Not Supported 00:09:11.220 Multi-Domain Subsystem: Not Supported 00:09:11.220 Fixed Capacity Management: Not Supported 00:09:11.220 Variable Capacity Management: Not Supported 00:09:11.220 Delete Endurance Group: Not Supported 00:09:11.220 Delete NVM Set: Not Supported 00:09:11.220 Extended LBA Formats Supported: Supported 00:09:11.220 Flexible Data Placement Supported: Not Supported 00:09:11.220 00:09:11.220 Controller Memory Buffer Support 00:09:11.220 ================================ 00:09:11.220 Supported: No 00:09:11.220 00:09:11.220 Persistent Memory Region Support 00:09:11.220 ================================ 00:09:11.220 Supported: No 00:09:11.220 00:09:11.221 Admin Command Set Attributes 00:09:11.221 ============================ 00:09:11.221 Security Send/Receive: Not Supported 00:09:11.221 Format NVM: Supported 00:09:11.221 Firmware Activate/Download: Not Supported 00:09:11.221 Namespace Management: Supported 00:09:11.221 Device Self-Test: Not Supported 00:09:11.221 Directives: Supported 00:09:11.221 NVMe-MI: Not Supported 00:09:11.221 Virtualization Management: Not Supported 00:09:11.221 Doorbell Buffer Config: Supported 00:09:11.221 Get LBA Status Capability: Not Supported 00:09:11.221 Command & Feature Lockdown Capability: Not Supported 00:09:11.221 Abort Command Limit: 4 00:09:11.221 Async Event Request Limit: 4 00:09:11.221 Number of Firmware Slots: N/A 00:09:11.221 Firmware Slot 1 Read-Only: N/A 00:09:11.221 Firmware Activation Without Reset: N/A 00:09:11.221 Multiple Update Detection Support: N/A 00:09:11.221 Firmware Update Granularity: No Information Provided 00:09:11.221 Per-Namespace SMART Log: Yes 00:09:11.221 Asymmetric Namespace Access Log Page: Not Supported 00:09:11.221 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:11.221 Command Effects Log Page: Supported 00:09:11.221 Get Log Page Extended Data: Supported 00:09:11.221 Telemetry Log Pages: Not Supported 00:09:11.221 Persistent Event Log Pages: Not Supported 00:09:11.221 Supported Log Pages Log Page: May Support 00:09:11.221 Commands Supported & Effects Log Page: Not Supported 00:09:11.221 Feature Identifiers & Effects Log Page: May Support 00:09:11.221 NVMe-MI Commands & Effects Log Page: May Support 00:09:11.221 Data Area 4 for Telemetry Log: Not Supported 00:09:11.221 Error Log Page Entries Supported: 1 00:09:11.221 Keep Alive: Not Supported 00:09:11.221 00:09:11.221 NVM Command Set Attributes 00:09:11.221 ========================== 00:09:11.221 
Submission Queue Entry Size 00:09:11.221 Max: 64 00:09:11.221 Min: 64 00:09:11.221 Completion Queue Entry Size 00:09:11.221 Max: 16 00:09:11.221 Min: 16 00:09:11.221 Number of Namespaces: 256 00:09:11.221 Compare Command: Supported 00:09:11.221 Write Uncorrectable Command: Not Supported 00:09:11.221 Dataset Management Command: Supported 00:09:11.221 Write Zeroes Command: Supported 00:09:11.221 Set Features Save Field: Supported 00:09:11.221 Reservations: Not Supported 00:09:11.221 Timestamp: Supported 00:09:11.221 Copy: Supported 00:09:11.221 Volatile Write Cache: Present 00:09:11.221 Atomic Write Unit (Normal): 1 00:09:11.221 Atomic Write Unit (PFail): 1 00:09:11.221 Atomic Compare & Write Unit: 1 00:09:11.221 Fused Compare & Write: Not Supported 00:09:11.221 Scatter-Gather List 00:09:11.221 SGL Command Set: Supported 00:09:11.221 SGL Keyed: Not Supported 00:09:11.221 SGL Bit Bucket Descriptor: Not Supported 00:09:11.221 SGL Metadata Pointer: Not Supported 00:09:11.221 Oversized SGL: Not Supported 00:09:11.221 SGL Metadata Address: Not Supported 00:09:11.221 SGL Offset: Not Supported 00:09:11.221 Transport SGL Data Block: Not Supported 00:09:11.221 Replay Protected Memory Block: Not Supported 00:09:11.221 00:09:11.221 Firmware Slot Information 00:09:11.221 ========================= 00:09:11.221 Active slot: 1 00:09:11.221 Slot 1 Firmware Revision: 1.0 00:09:11.221 00:09:11.221 00:09:11.221 Commands Supported and Effects 00:09:11.221 ============================== 00:09:11.221 Admin Commands 00:09:11.221 -------------- 00:09:11.221 Delete I/O Submission Queue (00h): Supported 00:09:11.221 Create I/O Submission Queue (01h): Supported 00:09:11.221 Get Log Page (02h): Supported 00:09:11.221 Delete I/O Completion Queue (04h): Supported 00:09:11.221 Create I/O Completion Queue (05h): Supported 00:09:11.221 Identify (06h): Supported 00:09:11.221 Abort (08h): Supported 00:09:11.221 Set Features (09h): Supported 00:09:11.221 Get Features (0Ah): Supported 00:09:11.221 Asynchronous Event Request (0Ch): Supported 00:09:11.221 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:11.221 Directive Send (19h): Supported 00:09:11.221 Directive Receive (1Ah): Supported 00:09:11.221 Virtualization Management (1Ch): Supported 00:09:11.221 Doorbell Buffer Config (7Ch): Supported 00:09:11.221 Format NVM (80h): Supported LBA-Change 00:09:11.221 I/O Commands 00:09:11.221 ------------ 00:09:11.221 Flush (00h): Supported LBA-Change 00:09:11.221 Write (01h): Supported LBA-Change 00:09:11.221 Read (02h): Supported 00:09:11.221 Compare (05h): Supported 00:09:11.221 Write Zeroes (08h): Supported LBA-Change 00:09:11.221 Dataset Management (09h): Supported LBA-Change 00:09:11.221 Unknown (0Ch): Supported 00:09:11.221 Unknown (12h): Supported 00:09:11.221 Copy (19h): Supported LBA-Change 00:09:11.221 Unknown (1Dh): Supported LBA-Change 00:09:11.221 00:09:11.221 Error Log 00:09:11.221 ========= 00:09:11.221 00:09:11.221 Arbitration 00:09:11.221 =========== 00:09:11.221 Arbitration Burst: no limit 00:09:11.221 00:09:11.221 Power Management 00:09:11.221 ================ 00:09:11.221 Number of Power States: 1 00:09:11.221 Current Power State: Power State #0 00:09:11.221 Power State #0: 00:09:11.221 Max Power: 25.00 W 00:09:11.221 Non-Operational State: Operational 00:09:11.221 Entry Latency: 16 microseconds 00:09:11.221 Exit Latency: 4 microseconds 00:09:11.221 Relative Read Throughput: 0 00:09:11.221 Relative Read Latency: 0 00:09:11.221 Relative Write Throughput: 0 00:09:11.221 Relative Write Latency: 0 
00:09:11.221 Idle Power: Not Reported 00:09:11.222 Active Power: Not Reported 00:09:11.222 Non-Operational Permissive Mode: Not Supported 00:09:11.222 00:09:11.222 Health Information 00:09:11.222 ================== 00:09:11.222 Critical Warnings: 00:09:11.222 Available Spare Space: OK 00:09:11.222 Temperature: OK 00:09:11.222 Device Reliability: OK 00:09:11.222 Read Only: No 00:09:11.222 Volatile Memory Backup: OK 00:09:11.222 Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.222 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:11.222 Available Spare: 0% 00:09:11.222 Available Spare Threshold: 0% 00:09:11.222 Life Percentage Used: 0% 00:09:11.222 Data Units Read: 3485 00:09:11.222 Data Units Written: 1596 00:09:11.222 Host Read Commands: 153665 00:09:11.222 Host Write Commands: 74849 00:09:11.222 Controller Busy Time: 0 minutes 00:09:11.222 Power Cycles: 0 00:09:11.222 Power On Hours: 0 hours 00:09:11.222 Unsafe Shutdowns: 0 00:09:11.222 Unrecoverable Media Errors: 0 00:09:11.222 Lifetime Error Log Entries: 0 00:09:11.222 Warning Temperature Time: 0 minutes 00:09:11.222 Critical Temperature Time: 0 minutes 00:09:11.222 00:09:11.222 Number of Queues 00:09:11.222 ================ 00:09:11.222 Number of I/O Submission Queues: 64 00:09:11.222 Number of I/O Completion Queues: 64 00:09:11.222 00:09:11.222 ZNS Specific Controller Data 00:09:11.222 ============================ 00:09:11.222 Zone Append Size Limit: 0 00:09:11.222 00:09:11.222 00:09:11.222 Active Namespaces 00:09:11.222 ================= 00:09:11.222 Namespace ID:1 00:09:11.222 Error Recovery Timeout: Unlimited 00:09:11.222 Command Set Identifier: NVM (00h) 00:09:11.222 Deallocate: Supported 00:09:11.222 Deallocated/Unwritten Error: Supported 00:09:11.222 Deallocated Read Value: All 0x00 00:09:11.222 Deallocate in Write Zeroes: Not Supported 00:09:11.222 Deallocated Guard Field: 0xFFFF 00:09:11.222 Flush: Supported 00:09:11.222 Reservation: Not Supported 00:09:11.222 Namespace Sharing Capabilities: Private 00:09:11.222 Size (in LBAs): 1048576 (4GiB) 00:09:11.222 Capacity (in LBAs): 1048576 (4GiB) 00:09:11.222 Utilization (in LBAs): 1048576 (4GiB) 00:09:11.222 Thin Provisioning: Not Supported 00:09:11.222 Per-NS Atomic Units: No 00:09:11.222 Maximum Single Source Range Length: 128 00:09:11.222 Maximum Copy Length: 128 00:09:11.222 Maximum Source Range Count: 128 00:09:11.222 NGUID/EUI64 Never Reused: No 00:09:11.222 Namespace Write Protected: No 00:09:11.222 Number of LBA Formats: 8 00:09:11.222 Current LBA Format: LBA Format #04 00:09:11.222 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:11.222 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:11.222 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:11.222 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:11.222 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:11.222 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:11.222 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:11.222 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:11.222 00:09:11.222 Namespace ID:2 00:09:11.222 Error Recovery Timeout: Unlimited 00:09:11.222 Command Set Identifier: NVM (00h) 00:09:11.222 Deallocate: Supported 00:09:11.222 Deallocated/Unwritten Error: Supported 00:09:11.222 Deallocated Read Value: All 0x00 00:09:11.222 Deallocate in Write Zeroes: Not Supported 00:09:11.222 Deallocated Guard Field: 0xFFFF 00:09:11.222 Flush: Supported 00:09:11.222 Reservation: Not Supported 00:09:11.222 Namespace Sharing Capabilities: Private 00:09:11.222 Size (in LBAs): 
1048576 (4GiB) 00:09:11.222 Capacity (in LBAs): 1048576 (4GiB) 00:09:11.222 Utilization (in LBAs): 1048576 (4GiB) 00:09:11.222 Thin Provisioning: Not Supported 00:09:11.222 Per-NS Atomic Units: No 00:09:11.222 Maximum Single Source Range Length: 128 00:09:11.222 Maximum Copy Length: 128 00:09:11.222 Maximum Source Range Count: 128 00:09:11.222 NGUID/EUI64 Never Reused: No 00:09:11.222 Namespace Write Protected: No 00:09:11.222 Number of LBA Formats: 8 00:09:11.222 Current LBA Format: LBA Format #04 00:09:11.222 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:11.222 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:11.222 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:11.222 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:11.222 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:11.222 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:11.222 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:11.222 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:11.222 00:09:11.222 Namespace ID:3 00:09:11.222 Error Recovery Timeout: Unlimited 00:09:11.222 Command Set Identifier: NVM (00h) 00:09:11.222 Deallocate: Supported 00:09:11.222 Deallocated/Unwritten Error: Supported 00:09:11.222 Deallocated Read Value: All 0x00 00:09:11.222 Deallocate in Write Zeroes: Not Supported 00:09:11.222 Deallocated Guard Field: 0xFFFF 00:09:11.222 Flush: Supported 00:09:11.222 Reservation: Not Supported 00:09:11.222 Namespace Sharing Capabilities: Private 00:09:11.222 Size (in LBAs): 1048576 (4GiB) 00:09:11.222 Capacity (in LBAs): 1048576 (4GiB) 00:09:11.222 Utilization (in LBAs): 1048576 (4GiB) 00:09:11.222 Thin Provisioning: Not Supported 00:09:11.222 Per-NS Atomic Units: No 00:09:11.222 Maximum Single Source Range Length: 128 00:09:11.222 Maximum Copy Length: 128 00:09:11.222 Maximum Source Range Count: 128 00:09:11.222 NGUID/EUI64 Never Reused: No 00:09:11.222 Namespace Write Protected: No 00:09:11.222 Number of LBA Formats: 8 00:09:11.222 Current LBA Format: LBA Format #04 00:09:11.222 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:11.222 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:11.222 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:11.222 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:11.222 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:11.222 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:11.222 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:11.222 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:11.222 00:09:11.222 14:10:09 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:11.222 14:10:09 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:09:11.483 ===================================================== 00:09:11.483 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:11.483 ===================================================== 00:09:11.483 Controller Capabilities/Features 00:09:11.483 ================================ 00:09:11.483 Vendor ID: 1b36 00:09:11.483 Subsystem Vendor ID: 1af4 00:09:11.483 Serial Number: 12340 00:09:11.483 Model Number: QEMU NVMe Ctrl 00:09:11.483 Firmware Version: 8.0.0 00:09:11.483 Recommended Arb Burst: 6 00:09:11.483 IEEE OUI Identifier: 00 54 52 00:09:11.483 Multi-path I/O 00:09:11.483 May have multiple subsystem ports: No 00:09:11.483 May have multiple controllers: No 00:09:11.484 Associated with SR-IOV VF: No 00:09:11.484 Max Data Transfer Size: 524288 00:09:11.484 Max Number of Namespaces: 256 
00:09:11.484 Max Number of I/O Queues: 64 00:09:11.484 NVMe Specification Version (VS): 1.4 00:09:11.484 NVMe Specification Version (Identify): 1.4 00:09:11.484 Maximum Queue Entries: 2048 00:09:11.484 Contiguous Queues Required: Yes 00:09:11.484 Arbitration Mechanisms Supported 00:09:11.484 Weighted Round Robin: Not Supported 00:09:11.484 Vendor Specific: Not Supported 00:09:11.484 Reset Timeout: 7500 ms 00:09:11.484 Doorbell Stride: 4 bytes 00:09:11.484 NVM Subsystem Reset: Not Supported 00:09:11.484 Command Sets Supported 00:09:11.484 NVM Command Set: Supported 00:09:11.484 Boot Partition: Not Supported 00:09:11.484 Memory Page Size Minimum: 4096 bytes 00:09:11.484 Memory Page Size Maximum: 65536 bytes 00:09:11.484 Persistent Memory Region: Not Supported 00:09:11.484 Optional Asynchronous Events Supported 00:09:11.484 Namespace Attribute Notices: Supported 00:09:11.484 Firmware Activation Notices: Not Supported 00:09:11.484 ANA Change Notices: Not Supported 00:09:11.484 PLE Aggregate Log Change Notices: Not Supported 00:09:11.484 LBA Status Info Alert Notices: Not Supported 00:09:11.484 EGE Aggregate Log Change Notices: Not Supported 00:09:11.484 Normal NVM Subsystem Shutdown event: Not Supported 00:09:11.484 Zone Descriptor Change Notices: Not Supported 00:09:11.484 Discovery Log Change Notices: Not Supported 00:09:11.484 Controller Attributes 00:09:11.484 128-bit Host Identifier: Not Supported 00:09:11.484 Non-Operational Permissive Mode: Not Supported 00:09:11.484 NVM Sets: Not Supported 00:09:11.484 Read Recovery Levels: Not Supported 00:09:11.484 Endurance Groups: Not Supported 00:09:11.484 Predictable Latency Mode: Not Supported 00:09:11.484 Traffic Based Keep Alive: Not Supported 00:09:11.484 Namespace Granularity: Not Supported 00:09:11.484 SQ Associations: Not Supported 00:09:11.484 UUID List: Not Supported 00:09:11.484 Multi-Domain Subsystem: Not Supported 00:09:11.484 Fixed Capacity Management: Not Supported 00:09:11.484 Variable Capacity Management: Not Supported 00:09:11.484 Delete Endurance Group: Not Supported 00:09:11.484 Delete NVM Set: Not Supported 00:09:11.484 Extended LBA Formats Supported: Supported 00:09:11.484 Flexible Data Placement Supported: Not Supported 00:09:11.484 00:09:11.484 Controller Memory Buffer Support 00:09:11.484 ================================ 00:09:11.484 Supported: No 00:09:11.484 00:09:11.484 Persistent Memory Region Support 00:09:11.484 ================================ 00:09:11.484 Supported: No 00:09:11.484 00:09:11.484 Admin Command Set Attributes 00:09:11.484 ============================ 00:09:11.484 Security Send/Receive: Not Supported 00:09:11.484 Format NVM: Supported 00:09:11.484 Firmware Activate/Download: Not Supported 00:09:11.484 Namespace Management: Supported 00:09:11.484 Device Self-Test: Not Supported 00:09:11.484 Directives: Supported 00:09:11.484 NVMe-MI: Not Supported 00:09:11.484 Virtualization Management: Not Supported 00:09:11.484 Doorbell Buffer Config: Supported 00:09:11.484 Get LBA Status Capability: Not Supported 00:09:11.484 Command & Feature Lockdown Capability: Not Supported 00:09:11.484 Abort Command Limit: 4 00:09:11.484 Async Event Request Limit: 4 00:09:11.484 Number of Firmware Slots: N/A 00:09:11.484 Firmware Slot 1 Read-Only: N/A 00:09:11.484 Firmware Activation Without Reset: N/A 00:09:11.484 Multiple Update Detection Support: N/A 00:09:11.484 Firmware Update Granularity: No Information Provided 00:09:11.484 Per-Namespace SMART Log: Yes 00:09:11.484 Asymmetric Namespace Access Log Page: Not Supported 
00:09:11.484 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:11.484 Command Effects Log Page: Supported 00:09:11.484 Get Log Page Extended Data: Supported 00:09:11.484 Telemetry Log Pages: Not Supported 00:09:11.484 Persistent Event Log Pages: Not Supported 00:09:11.484 Supported Log Pages Log Page: May Support 00:09:11.484 Commands Supported & Effects Log Page: Not Supported 00:09:11.484 Feature Identifiers & Effects Log Page: May Support 00:09:11.484 NVMe-MI Commands & Effects Log Page: May Support 00:09:11.484 Data Area 4 for Telemetry Log: Not Supported 00:09:11.484 Error Log Page Entries Supported: 1 00:09:11.484 Keep Alive: Not Supported 00:09:11.484 00:09:11.484 NVM Command Set Attributes 00:09:11.484 ========================== 00:09:11.484 Submission Queue Entry Size 00:09:11.484 Max: 64 00:09:11.484 Min: 64 00:09:11.484 Completion Queue Entry Size 00:09:11.484 Max: 16 00:09:11.484 Min: 16 00:09:11.484 Number of Namespaces: 256 00:09:11.484 Compare Command: Supported 00:09:11.484 Write Uncorrectable Command: Not Supported 00:09:11.484 Dataset Management Command: Supported 00:09:11.484 Write Zeroes Command: Supported 00:09:11.484 Set Features Save Field: Supported 00:09:11.484 Reservations: Not Supported 00:09:11.484 Timestamp: Supported 00:09:11.484 Copy: Supported 00:09:11.484 Volatile Write Cache: Present 00:09:11.484 Atomic Write Unit (Normal): 1 00:09:11.484 Atomic Write Unit (PFail): 1 00:09:11.484 Atomic Compare & Write Unit: 1 00:09:11.484 Fused Compare & Write: Not Supported 00:09:11.484 Scatter-Gather List 00:09:11.484 SGL Command Set: Supported 00:09:11.484 SGL Keyed: Not Supported 00:09:11.484 SGL Bit Bucket Descriptor: Not Supported 00:09:11.484 SGL Metadata Pointer: Not Supported 00:09:11.484 Oversized SGL: Not Supported 00:09:11.484 SGL Metadata Address: Not Supported 00:09:11.484 SGL Offset: Not Supported 00:09:11.484 Transport SGL Data Block: Not Supported 00:09:11.484 Replay Protected Memory Block: Not Supported 00:09:11.484 00:09:11.484 Firmware Slot Information 00:09:11.484 ========================= 00:09:11.484 Active slot: 1 00:09:11.484 Slot 1 Firmware Revision: 1.0 00:09:11.484 00:09:11.484 00:09:11.484 Commands Supported and Effects 00:09:11.484 ============================== 00:09:11.484 Admin Commands 00:09:11.484 -------------- 00:09:11.484 Delete I/O Submission Queue (00h): Supported 00:09:11.484 Create I/O Submission Queue (01h): Supported 00:09:11.484 Get Log Page (02h): Supported 00:09:11.484 Delete I/O Completion Queue (04h): Supported 00:09:11.484 Create I/O Completion Queue (05h): Supported 00:09:11.484 Identify (06h): Supported 00:09:11.484 Abort (08h): Supported 00:09:11.484 Set Features (09h): Supported 00:09:11.484 Get Features (0Ah): Supported 00:09:11.484 Asynchronous Event Request (0Ch): Supported 00:09:11.484 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:11.484 Directive Send (19h): Supported 00:09:11.484 Directive Receive (1Ah): Supported 00:09:11.484 Virtualization Management (1Ch): Supported 00:09:11.484 Doorbell Buffer Config (7Ch): Supported 00:09:11.484 Format NVM (80h): Supported LBA-Change 00:09:11.484 I/O Commands 00:09:11.484 ------------ 00:09:11.484 Flush (00h): Supported LBA-Change 00:09:11.484 Write (01h): Supported LBA-Change 00:09:11.484 Read (02h): Supported 00:09:11.484 Compare (05h): Supported 00:09:11.484 Write Zeroes (08h): Supported LBA-Change 00:09:11.484 Dataset Management (09h): Supported LBA-Change 00:09:11.484 Unknown (0Ch): Supported 00:09:11.484 Unknown (12h): Supported 00:09:11.484 Copy (19h): 
Supported LBA-Change 00:09:11.484 Unknown (1Dh): Supported LBA-Change 00:09:11.484 00:09:11.484 Error Log 00:09:11.484 ========= 00:09:11.484 00:09:11.484 Arbitration 00:09:11.484 =========== 00:09:11.484 Arbitration Burst: no limit 00:09:11.484 00:09:11.484 Power Management 00:09:11.484 ================ 00:09:11.484 Number of Power States: 1 00:09:11.484 Current Power State: Power State #0 00:09:11.484 Power State #0: 00:09:11.484 Max Power: 25.00 W 00:09:11.484 Non-Operational State: Operational 00:09:11.484 Entry Latency: 16 microseconds 00:09:11.484 Exit Latency: 4 microseconds 00:09:11.484 Relative Read Throughput: 0 00:09:11.484 Relative Read Latency: 0 00:09:11.484 Relative Write Throughput: 0 00:09:11.484 Relative Write Latency: 0 00:09:11.484 Idle Power: Not Reported 00:09:11.484 Active Power: Not Reported 00:09:11.484 Non-Operational Permissive Mode: Not Supported 00:09:11.484 00:09:11.484 Health Information 00:09:11.484 ================== 00:09:11.484 Critical Warnings: 00:09:11.484 Available Spare Space: OK 00:09:11.485 Temperature: OK 00:09:11.485 Device Reliability: OK 00:09:11.485 Read Only: No 00:09:11.485 Volatile Memory Backup: OK 00:09:11.485 Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.485 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:11.485 Available Spare: 0% 00:09:11.485 Available Spare Threshold: 0% 00:09:11.485 Life Percentage Used: 0% 00:09:11.485 Data Units Read: 1640 00:09:11.485 Data Units Written: 746 00:09:11.485 Host Read Commands: 72675 00:09:11.485 Host Write Commands: 35878 00:09:11.485 Controller Busy Time: 0 minutes 00:09:11.485 Power Cycles: 0 00:09:11.485 Power On Hours: 0 hours 00:09:11.485 Unsafe Shutdowns: 0 00:09:11.485 Unrecoverable Media Errors: 0 00:09:11.485 Lifetime Error Log Entries: 0 00:09:11.485 Warning Temperature Time: 0 minutes 00:09:11.485 Critical Temperature Time: 0 minutes 00:09:11.485 00:09:11.485 Number of Queues 00:09:11.485 ================ 00:09:11.485 Number of I/O Submission Queues: 64 00:09:11.485 Number of I/O Completion Queues: 64 00:09:11.485 00:09:11.485 ZNS Specific Controller Data 00:09:11.485 ============================ 00:09:11.485 Zone Append Size Limit: 0 00:09:11.485 00:09:11.485 00:09:11.485 Active Namespaces 00:09:11.485 ================= 00:09:11.485 Namespace ID:1 00:09:11.485 Error Recovery Timeout: Unlimited 00:09:11.485 Command Set Identifier: NVM (00h) 00:09:11.485 Deallocate: Supported 00:09:11.485 Deallocated/Unwritten Error: Supported 00:09:11.485 Deallocated Read Value: All 0x00 00:09:11.485 Deallocate in Write Zeroes: Not Supported 00:09:11.485 Deallocated Guard Field: 0xFFFF 00:09:11.485 Flush: Supported 00:09:11.485 Reservation: Not Supported 00:09:11.485 Metadata Transferred as: Separate Metadata Buffer 00:09:11.485 Namespace Sharing Capabilities: Private 00:09:11.485 Size (in LBAs): 1548666 (5GiB) 00:09:11.485 Capacity (in LBAs): 1548666 (5GiB) 00:09:11.485 Utilization (in LBAs): 1548666 (5GiB) 00:09:11.485 Thin Provisioning: Not Supported 00:09:11.485 Per-NS Atomic Units: No 00:09:11.485 Maximum Single Source Range Length: 128 00:09:11.485 Maximum Copy Length: 128 00:09:11.485 Maximum Source Range Count: 128 00:09:11.485 NGUID/EUI64 Never Reused: No 00:09:11.485 Namespace Write Protected: No 00:09:11.485 Number of LBA Formats: 8 00:09:11.485 Current LBA Format: LBA Format #07 00:09:11.485 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:11.485 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:11.485 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:11.485 LBA 
Format #03: Data Size: 512 Metadata Size: 64 00:09:11.485 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:11.485 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:11.485 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:11.485 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:11.485 00:09:11.485 14:10:09 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:11.485 14:10:09 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:09:11.746 ===================================================== 00:09:11.746 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:11.746 ===================================================== 00:09:11.746 Controller Capabilities/Features 00:09:11.746 ================================ 00:09:11.746 Vendor ID: 1b36 00:09:11.746 Subsystem Vendor ID: 1af4 00:09:11.746 Serial Number: 12341 00:09:11.746 Model Number: QEMU NVMe Ctrl 00:09:11.746 Firmware Version: 8.0.0 00:09:11.746 Recommended Arb Burst: 6 00:09:11.746 IEEE OUI Identifier: 00 54 52 00:09:11.746 Multi-path I/O 00:09:11.746 May have multiple subsystem ports: No 00:09:11.746 May have multiple controllers: No 00:09:11.746 Associated with SR-IOV VF: No 00:09:11.746 Max Data Transfer Size: 524288 00:09:11.746 Max Number of Namespaces: 256 00:09:11.746 Max Number of I/O Queues: 64 00:09:11.746 NVMe Specification Version (VS): 1.4 00:09:11.746 NVMe Specification Version (Identify): 1.4 00:09:11.746 Maximum Queue Entries: 2048 00:09:11.746 Contiguous Queues Required: Yes 00:09:11.746 Arbitration Mechanisms Supported 00:09:11.746 Weighted Round Robin: Not Supported 00:09:11.746 Vendor Specific: Not Supported 00:09:11.746 Reset Timeout: 7500 ms 00:09:11.746 Doorbell Stride: 4 bytes 00:09:11.746 NVM Subsystem Reset: Not Supported 00:09:11.746 Command Sets Supported 00:09:11.746 NVM Command Set: Supported 00:09:11.746 Boot Partition: Not Supported 00:09:11.746 Memory Page Size Minimum: 4096 bytes 00:09:11.746 Memory Page Size Maximum: 65536 bytes 00:09:11.746 Persistent Memory Region: Not Supported 00:09:11.746 Optional Asynchronous Events Supported 00:09:11.746 Namespace Attribute Notices: Supported 00:09:11.746 Firmware Activation Notices: Not Supported 00:09:11.746 ANA Change Notices: Not Supported 00:09:11.746 PLE Aggregate Log Change Notices: Not Supported 00:09:11.746 LBA Status Info Alert Notices: Not Supported 00:09:11.746 EGE Aggregate Log Change Notices: Not Supported 00:09:11.746 Normal NVM Subsystem Shutdown event: Not Supported 00:09:11.746 Zone Descriptor Change Notices: Not Supported 00:09:11.746 Discovery Log Change Notices: Not Supported 00:09:11.746 Controller Attributes 00:09:11.746 128-bit Host Identifier: Not Supported 00:09:11.746 Non-Operational Permissive Mode: Not Supported 00:09:11.746 NVM Sets: Not Supported 00:09:11.746 Read Recovery Levels: Not Supported 00:09:11.746 Endurance Groups: Not Supported 00:09:11.746 Predictable Latency Mode: Not Supported 00:09:11.746 Traffic Based Keep Alive: Not Supported 00:09:11.746 Namespace Granularity: Not Supported 00:09:11.746 SQ Associations: Not Supported 00:09:11.746 UUID List: Not Supported 00:09:11.746 Multi-Domain Subsystem: Not Supported 00:09:11.746 Fixed Capacity Management: Not Supported 00:09:11.746 Variable Capacity Management: Not Supported 00:09:11.746 Delete Endurance Group: Not Supported 00:09:11.746 Delete NVM Set: Not Supported 00:09:11.746 Extended LBA Formats Supported: Supported 00:09:11.746 Flexible Data Placement Supported: Not Supported 
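
Note: the GiB figures that accompany Size/Capacity/Utilization in the namespace listings above are the LBA count multiplied by the data size of the current LBA format (4096 bytes for format #04): 1310720 * 4096 gives 5 GiB, 1048576 * 4096 gives 4 GiB, and 262144 * 4096 gives 1 GiB, matching the dumps. A small sketch of that conversion (the helper name is illustrative, not part of the test suite):

  # Convert an LBA count plus LBA data size into whole GiB.
  lba_to_gib() { echo $(( $1 * $2 / 1024 / 1024 / 1024 )); }
  lba_to_gib 1310720 4096   # -> 5
  lba_to_gib 1048576 4096   # -> 4
  lba_to_gib 262144  4096   # -> 1
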
00:09:11.746 00:09:11.746 Controller Memory Buffer Support 00:09:11.746 ================================ 00:09:11.746 Supported: No 00:09:11.746 00:09:11.746 Persistent Memory Region Support 00:09:11.746 ================================ 00:09:11.746 Supported: No 00:09:11.746 00:09:11.746 Admin Command Set Attributes 00:09:11.746 ============================ 00:09:11.746 Security Send/Receive: Not Supported 00:09:11.746 Format NVM: Supported 00:09:11.746 Firmware Activate/Download: Not Supported 00:09:11.746 Namespace Management: Supported 00:09:11.746 Device Self-Test: Not Supported 00:09:11.746 Directives: Supported 00:09:11.746 NVMe-MI: Not Supported 00:09:11.746 Virtualization Management: Not Supported 00:09:11.746 Doorbell Buffer Config: Supported 00:09:11.746 Get LBA Status Capability: Not Supported 00:09:11.746 Command & Feature Lockdown Capability: Not Supported 00:09:11.746 Abort Command Limit: 4 00:09:11.746 Async Event Request Limit: 4 00:09:11.746 Number of Firmware Slots: N/A 00:09:11.746 Firmware Slot 1 Read-Only: N/A 00:09:11.746 Firmware Activation Without Reset: N/A 00:09:11.746 Multiple Update Detection Support: N/A 00:09:11.746 Firmware Update Granularity: No Information Provided 00:09:11.746 Per-Namespace SMART Log: Yes 00:09:11.746 Asymmetric Namespace Access Log Page: Not Supported 00:09:11.746 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:11.746 Command Effects Log Page: Supported 00:09:11.747 Get Log Page Extended Data: Supported 00:09:11.747 Telemetry Log Pages: Not Supported 00:09:11.747 Persistent Event Log Pages: Not Supported 00:09:11.747 Supported Log Pages Log Page: May Support 00:09:11.747 Commands Supported & Effects Log Page: Not Supported 00:09:11.747 Feature Identifiers & Effects Log Page: May Support 00:09:11.747 NVMe-MI Commands & Effects Log Page: May Support 00:09:11.747 Data Area 4 for Telemetry Log: Not Supported 00:09:11.747 Error Log Page Entries Supported: 1 00:09:11.747 Keep Alive: Not Supported 00:09:11.747 00:09:11.747 NVM Command Set Attributes 00:09:11.747 ========================== 00:09:11.747 Submission Queue Entry Size 00:09:11.747 Max: 64 00:09:11.747 Min: 64 00:09:11.747 Completion Queue Entry Size 00:09:11.747 Max: 16 00:09:11.747 Min: 16 00:09:11.747 Number of Namespaces: 256 00:09:11.747 Compare Command: Supported 00:09:11.747 Write Uncorrectable Command: Not Supported 00:09:11.747 Dataset Management Command: Supported 00:09:11.747 Write Zeroes Command: Supported 00:09:11.747 Set Features Save Field: Supported 00:09:11.747 Reservations: Not Supported 00:09:11.747 Timestamp: Supported 00:09:11.747 Copy: Supported 00:09:11.747 Volatile Write Cache: Present 00:09:11.747 Atomic Write Unit (Normal): 1 00:09:11.747 Atomic Write Unit (PFail): 1 00:09:11.747 Atomic Compare & Write Unit: 1 00:09:11.747 Fused Compare & Write: Not Supported 00:09:11.747 Scatter-Gather List 00:09:11.747 SGL Command Set: Supported 00:09:11.747 SGL Keyed: Not Supported 00:09:11.747 SGL Bit Bucket Descriptor: Not Supported 00:09:11.747 SGL Metadata Pointer: Not Supported 00:09:11.747 Oversized SGL: Not Supported 00:09:11.747 SGL Metadata Address: Not Supported 00:09:11.747 SGL Offset: Not Supported 00:09:11.747 Transport SGL Data Block: Not Supported 00:09:11.747 Replay Protected Memory Block: Not Supported 00:09:11.747 00:09:11.747 Firmware Slot Information 00:09:11.747 ========================= 00:09:11.747 Active slot: 1 00:09:11.747 Slot 1 Firmware Revision: 1.0 00:09:11.747 00:09:11.747 00:09:11.747 Commands Supported and Effects 00:09:11.747 
============================== 00:09:11.747 Admin Commands 00:09:11.747 -------------- 00:09:11.747 Delete I/O Submission Queue (00h): Supported 00:09:11.747 Create I/O Submission Queue (01h): Supported 00:09:11.747 Get Log Page (02h): Supported 00:09:11.747 Delete I/O Completion Queue (04h): Supported 00:09:11.747 Create I/O Completion Queue (05h): Supported 00:09:11.747 Identify (06h): Supported 00:09:11.747 Abort (08h): Supported 00:09:11.747 Set Features (09h): Supported 00:09:11.747 Get Features (0Ah): Supported 00:09:11.747 Asynchronous Event Request (0Ch): Supported 00:09:11.747 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:11.747 Directive Send (19h): Supported 00:09:11.747 Directive Receive (1Ah): Supported 00:09:11.747 Virtualization Management (1Ch): Supported 00:09:11.747 Doorbell Buffer Config (7Ch): Supported 00:09:11.747 Format NVM (80h): Supported LBA-Change 00:09:11.747 I/O Commands 00:09:11.747 ------------ 00:09:11.747 Flush (00h): Supported LBA-Change 00:09:11.747 Write (01h): Supported LBA-Change 00:09:11.747 Read (02h): Supported 00:09:11.747 Compare (05h): Supported 00:09:11.747 Write Zeroes (08h): Supported LBA-Change 00:09:11.747 Dataset Management (09h): Supported LBA-Change 00:09:11.747 Unknown (0Ch): Supported 00:09:11.747 Unknown (12h): Supported 00:09:11.747 Copy (19h): Supported LBA-Change 00:09:11.747 Unknown (1Dh): Supported LBA-Change 00:09:11.747 00:09:11.747 Error Log 00:09:11.747 ========= 00:09:11.747 00:09:11.747 Arbitration 00:09:11.747 =========== 00:09:11.747 Arbitration Burst: no limit 00:09:11.747 00:09:11.747 Power Management 00:09:11.747 ================ 00:09:11.747 Number of Power States: 1 00:09:11.747 Current Power State: Power State #0 00:09:11.747 Power State #0: 00:09:11.747 Max Power: 25.00 W 00:09:11.747 Non-Operational State: Operational 00:09:11.747 Entry Latency: 16 microseconds 00:09:11.747 Exit Latency: 4 microseconds 00:09:11.747 Relative Read Throughput: 0 00:09:11.747 Relative Read Latency: 0 00:09:11.747 Relative Write Throughput: 0 00:09:11.747 Relative Write Latency: 0 00:09:11.747 Idle Power: Not Reported 00:09:11.747 Active Power: Not Reported 00:09:11.747 Non-Operational Permissive Mode: Not Supported 00:09:11.747 00:09:11.747 Health Information 00:09:11.747 ================== 00:09:11.747 Critical Warnings: 00:09:11.747 Available Spare Space: OK 00:09:11.747 Temperature: OK 00:09:11.747 Device Reliability: OK 00:09:11.747 Read Only: No 00:09:11.747 Volatile Memory Backup: OK 00:09:11.747 Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.747 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:11.747 Available Spare: 0% 00:09:11.747 Available Spare Threshold: 0% 00:09:11.747 Life Percentage Used: 0% 00:09:11.747 Data Units Read: 1111 00:09:11.747 Data Units Written: 510 00:09:11.747 Host Read Commands: 50622 00:09:11.747 Host Write Commands: 24722 00:09:11.747 Controller Busy Time: 0 minutes 00:09:11.747 Power Cycles: 0 00:09:11.747 Power On Hours: 0 hours 00:09:11.747 Unsafe Shutdowns: 0 00:09:11.747 Unrecoverable Media Errors: 0 00:09:11.747 Lifetime Error Log Entries: 0 00:09:11.747 Warning Temperature Time: 0 minutes 00:09:11.747 Critical Temperature Time: 0 minutes 00:09:11.747 00:09:11.747 Number of Queues 00:09:11.747 ================ 00:09:11.747 Number of I/O Submission Queues: 64 00:09:11.747 Number of I/O Completion Queues: 64 00:09:11.747 00:09:11.747 ZNS Specific Controller Data 00:09:11.747 ============================ 00:09:11.747 Zone Append Size Limit: 0 00:09:11.747 00:09:11.747 
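
Note: the temperature fields in each Health Information block are reported in Kelvin, with the Celsius value in parentheses obtained by subtracting 273: 323 K is 50 C, and the 343 K threshold is 70 C. A one-liner to confirm the two values printed above:

  for kelvin in 323 343; do echo "$kelvin Kelvin = $((kelvin - 273)) Celsius"; done
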
00:09:11.747 Active Namespaces 00:09:11.747 ================= 00:09:11.747 Namespace ID:1 00:09:11.748 Error Recovery Timeout: Unlimited 00:09:11.748 Command Set Identifier: NVM (00h) 00:09:11.748 Deallocate: Supported 00:09:11.748 Deallocated/Unwritten Error: Supported 00:09:11.748 Deallocated Read Value: All 0x00 00:09:11.748 Deallocate in Write Zeroes: Not Supported 00:09:11.748 Deallocated Guard Field: 0xFFFF 00:09:11.748 Flush: Supported 00:09:11.748 Reservation: Not Supported 00:09:11.748 Namespace Sharing Capabilities: Private 00:09:11.748 Size (in LBAs): 1310720 (5GiB) 00:09:11.748 Capacity (in LBAs): 1310720 (5GiB) 00:09:11.748 Utilization (in LBAs): 1310720 (5GiB) 00:09:11.748 Thin Provisioning: Not Supported 00:09:11.748 Per-NS Atomic Units: No 00:09:11.748 Maximum Single Source Range Length: 128 00:09:11.748 Maximum Copy Length: 128 00:09:11.748 Maximum Source Range Count: 128 00:09:11.748 NGUID/EUI64 Never Reused: No 00:09:11.748 Namespace Write Protected: No 00:09:11.748 Number of LBA Formats: 8 00:09:11.748 Current LBA Format: LBA Format #04 00:09:11.748 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:11.748 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:11.748 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:11.748 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:11.748 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:11.748 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:11.748 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:11.748 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:11.748 00:09:11.748 14:10:10 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:11.748 14:10:10 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:09:12.010 ===================================================== 00:09:12.010 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:12.010 ===================================================== 00:09:12.010 Controller Capabilities/Features 00:09:12.010 ================================ 00:09:12.010 Vendor ID: 1b36 00:09:12.010 Subsystem Vendor ID: 1af4 00:09:12.010 Serial Number: 12342 00:09:12.010 Model Number: QEMU NVMe Ctrl 00:09:12.010 Firmware Version: 8.0.0 00:09:12.010 Recommended Arb Burst: 6 00:09:12.010 IEEE OUI Identifier: 00 54 52 00:09:12.010 Multi-path I/O 00:09:12.010 May have multiple subsystem ports: No 00:09:12.010 May have multiple controllers: No 00:09:12.010 Associated with SR-IOV VF: No 00:09:12.010 Max Data Transfer Size: 524288 00:09:12.010 Max Number of Namespaces: 256 00:09:12.010 Max Number of I/O Queues: 64 00:09:12.010 NVMe Specification Version (VS): 1.4 00:09:12.010 NVMe Specification Version (Identify): 1.4 00:09:12.010 Maximum Queue Entries: 2048 00:09:12.010 Contiguous Queues Required: Yes 00:09:12.010 Arbitration Mechanisms Supported 00:09:12.010 Weighted Round Robin: Not Supported 00:09:12.010 Vendor Specific: Not Supported 00:09:12.010 Reset Timeout: 7500 ms 00:09:12.010 Doorbell Stride: 4 bytes 00:09:12.010 NVM Subsystem Reset: Not Supported 00:09:12.010 Command Sets Supported 00:09:12.010 NVM Command Set: Supported 00:09:12.010 Boot Partition: Not Supported 00:09:12.010 Memory Page Size Minimum: 4096 bytes 00:09:12.010 Memory Page Size Maximum: 65536 bytes 00:09:12.011 Persistent Memory Region: Not Supported 00:09:12.011 Optional Asynchronous Events Supported 00:09:12.011 Namespace Attribute Notices: Supported 00:09:12.011 Firmware Activation Notices: Not Supported 00:09:12.011 ANA Change 
Notices: Not Supported 00:09:12.011 PLE Aggregate Log Change Notices: Not Supported 00:09:12.011 LBA Status Info Alert Notices: Not Supported 00:09:12.011 EGE Aggregate Log Change Notices: Not Supported 00:09:12.011 Normal NVM Subsystem Shutdown event: Not Supported 00:09:12.011 Zone Descriptor Change Notices: Not Supported 00:09:12.011 Discovery Log Change Notices: Not Supported 00:09:12.011 Controller Attributes 00:09:12.011 128-bit Host Identifier: Not Supported 00:09:12.011 Non-Operational Permissive Mode: Not Supported 00:09:12.011 NVM Sets: Not Supported 00:09:12.011 Read Recovery Levels: Not Supported 00:09:12.011 Endurance Groups: Not Supported 00:09:12.011 Predictable Latency Mode: Not Supported 00:09:12.011 Traffic Based Keep Alive: Not Supported 00:09:12.011 Namespace Granularity: Not Supported 00:09:12.011 SQ Associations: Not Supported 00:09:12.011 UUID List: Not Supported 00:09:12.011 Multi-Domain Subsystem: Not Supported 00:09:12.011 Fixed Capacity Management: Not Supported 00:09:12.011 Variable Capacity Management: Not Supported 00:09:12.011 Delete Endurance Group: Not Supported 00:09:12.011 Delete NVM Set: Not Supported 00:09:12.011 Extended LBA Formats Supported: Supported 00:09:12.011 Flexible Data Placement Supported: Not Supported 00:09:12.011 00:09:12.011 Controller Memory Buffer Support 00:09:12.011 ================================ 00:09:12.011 Supported: No 00:09:12.011 00:09:12.011 Persistent Memory Region Support 00:09:12.011 ================================ 00:09:12.011 Supported: No 00:09:12.011 00:09:12.011 Admin Command Set Attributes 00:09:12.011 ============================ 00:09:12.011 Security Send/Receive: Not Supported 00:09:12.011 Format NVM: Supported 00:09:12.011 Firmware Activate/Download: Not Supported 00:09:12.011 Namespace Management: Supported 00:09:12.011 Device Self-Test: Not Supported 00:09:12.011 Directives: Supported 00:09:12.011 NVMe-MI: Not Supported 00:09:12.011 Virtualization Management: Not Supported 00:09:12.011 Doorbell Buffer Config: Supported 00:09:12.011 Get LBA Status Capability: Not Supported 00:09:12.011 Command & Feature Lockdown Capability: Not Supported 00:09:12.011 Abort Command Limit: 4 00:09:12.011 Async Event Request Limit: 4 00:09:12.011 Number of Firmware Slots: N/A 00:09:12.011 Firmware Slot 1 Read-Only: N/A 00:09:12.011 Firmware Activation Without Reset: N/A 00:09:12.011 Multiple Update Detection Support: N/A 00:09:12.011 Firmware Update Granularity: No Information Provided 00:09:12.011 Per-Namespace SMART Log: Yes 00:09:12.011 Asymmetric Namespace Access Log Page: Not Supported 00:09:12.011 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:12.011 Command Effects Log Page: Supported 00:09:12.011 Get Log Page Extended Data: Supported 00:09:12.011 Telemetry Log Pages: Not Supported 00:09:12.011 Persistent Event Log Pages: Not Supported 00:09:12.011 Supported Log Pages Log Page: May Support 00:09:12.011 Commands Supported & Effects Log Page: Not Supported 00:09:12.011 Feature Identifiers & Effects Log Page: May Support 00:09:12.011 NVMe-MI Commands & Effects Log Page: May Support 00:09:12.011 Data Area 4 for Telemetry Log: Not Supported 00:09:12.011 Error Log Page Entries Supported: 1 00:09:12.011 Keep Alive: Not Supported 00:09:12.011 00:09:12.011 NVM Command Set Attributes 00:09:12.011 ========================== 00:09:12.011 Submission Queue Entry Size 00:09:12.011 Max: 64 00:09:12.011 Min: 64 00:09:12.011 Completion Queue Entry Size 00:09:12.011 Max: 16 00:09:12.011 Min: 16 00:09:12.011 Number of Namespaces: 256 
00:09:12.011 Compare Command: Supported 00:09:12.011 Write Uncorrectable Command: Not Supported 00:09:12.011 Dataset Management Command: Supported 00:09:12.011 Write Zeroes Command: Supported 00:09:12.011 Set Features Save Field: Supported 00:09:12.011 Reservations: Not Supported 00:09:12.011 Timestamp: Supported 00:09:12.011 Copy: Supported 00:09:12.011 Volatile Write Cache: Present 00:09:12.011 Atomic Write Unit (Normal): 1 00:09:12.011 Atomic Write Unit (PFail): 1 00:09:12.011 Atomic Compare & Write Unit: 1 00:09:12.011 Fused Compare & Write: Not Supported 00:09:12.011 Scatter-Gather List 00:09:12.011 SGL Command Set: Supported 00:09:12.011 SGL Keyed: Not Supported 00:09:12.011 SGL Bit Bucket Descriptor: Not Supported 00:09:12.011 SGL Metadata Pointer: Not Supported 00:09:12.011 Oversized SGL: Not Supported 00:09:12.011 SGL Metadata Address: Not Supported 00:09:12.011 SGL Offset: Not Supported 00:09:12.011 Transport SGL Data Block: Not Supported 00:09:12.011 Replay Protected Memory Block: Not Supported 00:09:12.011 00:09:12.011 Firmware Slot Information 00:09:12.011 ========================= 00:09:12.011 Active slot: 1 00:09:12.011 Slot 1 Firmware Revision: 1.0 00:09:12.011 00:09:12.011 00:09:12.011 Commands Supported and Effects 00:09:12.011 ============================== 00:09:12.011 Admin Commands 00:09:12.011 -------------- 00:09:12.011 Delete I/O Submission Queue (00h): Supported 00:09:12.011 Create I/O Submission Queue (01h): Supported 00:09:12.011 Get Log Page (02h): Supported 00:09:12.011 Delete I/O Completion Queue (04h): Supported 00:09:12.011 Create I/O Completion Queue (05h): Supported 00:09:12.011 Identify (06h): Supported 00:09:12.011 Abort (08h): Supported 00:09:12.011 Set Features (09h): Supported 00:09:12.011 Get Features (0Ah): Supported 00:09:12.011 Asynchronous Event Request (0Ch): Supported 00:09:12.011 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:12.011 Directive Send (19h): Supported 00:09:12.011 Directive Receive (1Ah): Supported 00:09:12.011 Virtualization Management (1Ch): Supported 00:09:12.011 Doorbell Buffer Config (7Ch): Supported 00:09:12.011 Format NVM (80h): Supported LBA-Change 00:09:12.011 I/O Commands 00:09:12.011 ------------ 00:09:12.011 Flush (00h): Supported LBA-Change 00:09:12.011 Write (01h): Supported LBA-Change 00:09:12.011 Read (02h): Supported 00:09:12.011 Compare (05h): Supported 00:09:12.011 Write Zeroes (08h): Supported LBA-Change 00:09:12.011 Dataset Management (09h): Supported LBA-Change 00:09:12.011 Unknown (0Ch): Supported 00:09:12.011 Unknown (12h): Supported 00:09:12.011 Copy (19h): Supported LBA-Change 00:09:12.011 Unknown (1Dh): Supported LBA-Change 00:09:12.011 00:09:12.012 Error Log 00:09:12.012 ========= 00:09:12.012 00:09:12.012 Arbitration 00:09:12.012 =========== 00:09:12.012 Arbitration Burst: no limit 00:09:12.012 00:09:12.012 Power Management 00:09:12.012 ================ 00:09:12.012 Number of Power States: 1 00:09:12.012 Current Power State: Power State #0 00:09:12.012 Power State #0: 00:09:12.012 Max Power: 25.00 W 00:09:12.012 Non-Operational State: Operational 00:09:12.012 Entry Latency: 16 microseconds 00:09:12.012 Exit Latency: 4 microseconds 00:09:12.012 Relative Read Throughput: 0 00:09:12.012 Relative Read Latency: 0 00:09:12.012 Relative Write Throughput: 0 00:09:12.012 Relative Write Latency: 0 00:09:12.012 Idle Power: Not Reported 00:09:12.012 Active Power: Not Reported 00:09:12.012 Non-Operational Permissive Mode: Not Supported 00:09:12.012 00:09:12.012 Health Information 00:09:12.012 
================== 00:09:12.012 Critical Warnings: 00:09:12.012 Available Spare Space: OK 00:09:12.012 Temperature: OK 00:09:12.012 Device Reliability: OK 00:09:12.012 Read Only: No 00:09:12.012 Volatile Memory Backup: OK 00:09:12.012 Current Temperature: 323 Kelvin (50 Celsius) 00:09:12.012 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:12.012 Available Spare: 0% 00:09:12.012 Available Spare Threshold: 0% 00:09:12.012 Life Percentage Used: 0% 00:09:12.012 Data Units Read: 3485 00:09:12.012 Data Units Written: 1596 00:09:12.012 Host Read Commands: 153665 00:09:12.012 Host Write Commands: 74849 00:09:12.012 Controller Busy Time: 0 minutes 00:09:12.012 Power Cycles: 0 00:09:12.012 Power On Hours: 0 hours 00:09:12.012 Unsafe Shutdowns: 0 00:09:12.012 Unrecoverable Media Errors: 0 00:09:12.012 Lifetime Error Log Entries: 0 00:09:12.012 Warning Temperature Time: 0 minutes 00:09:12.012 Critical Temperature Time: 0 minutes 00:09:12.012 00:09:12.012 Number of Queues 00:09:12.012 ================ 00:09:12.012 Number of I/O Submission Queues: 64 00:09:12.012 Number of I/O Completion Queues: 64 00:09:12.012 00:09:12.012 ZNS Specific Controller Data 00:09:12.012 ============================ 00:09:12.012 Zone Append Size Limit: 0 00:09:12.012 00:09:12.012 00:09:12.012 Active Namespaces 00:09:12.012 ================= 00:09:12.012 Namespace ID:1 00:09:12.012 Error Recovery Timeout: Unlimited 00:09:12.012 Command Set Identifier: NVM (00h) 00:09:12.012 Deallocate: Supported 00:09:12.012 Deallocated/Unwritten Error: Supported 00:09:12.012 Deallocated Read Value: All 0x00 00:09:12.012 Deallocate in Write Zeroes: Not Supported 00:09:12.012 Deallocated Guard Field: 0xFFFF 00:09:12.012 Flush: Supported 00:09:12.012 Reservation: Not Supported 00:09:12.012 Namespace Sharing Capabilities: Private 00:09:12.012 Size (in LBAs): 1048576 (4GiB) 00:09:12.012 Capacity (in LBAs): 1048576 (4GiB) 00:09:12.012 Utilization (in LBAs): 1048576 (4GiB) 00:09:12.012 Thin Provisioning: Not Supported 00:09:12.012 Per-NS Atomic Units: No 00:09:12.012 Maximum Single Source Range Length: 128 00:09:12.012 Maximum Copy Length: 128 00:09:12.012 Maximum Source Range Count: 128 00:09:12.012 NGUID/EUI64 Never Reused: No 00:09:12.012 Namespace Write Protected: No 00:09:12.012 Number of LBA Formats: 8 00:09:12.012 Current LBA Format: LBA Format #04 00:09:12.012 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:12.012 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:12.012 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:12.012 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:12.012 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:12.012 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:12.012 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:12.012 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:12.012 00:09:12.012 Namespace ID:2 00:09:12.012 Error Recovery Timeout: Unlimited 00:09:12.012 Command Set Identifier: NVM (00h) 00:09:12.012 Deallocate: Supported 00:09:12.012 Deallocated/Unwritten Error: Supported 00:09:12.012 Deallocated Read Value: All 0x00 00:09:12.012 Deallocate in Write Zeroes: Not Supported 00:09:12.012 Deallocated Guard Field: 0xFFFF 00:09:12.012 Flush: Supported 00:09:12.012 Reservation: Not Supported 00:09:12.012 Namespace Sharing Capabilities: Private 00:09:12.012 Size (in LBAs): 1048576 (4GiB) 00:09:12.012 Capacity (in LBAs): 1048576 (4GiB) 00:09:12.012 Utilization (in LBAs): 1048576 (4GiB) 00:09:12.012 Thin Provisioning: Not Supported 00:09:12.012 Per-NS Atomic Units: No 
00:09:12.012 Maximum Single Source Range Length: 128 00:09:12.012 Maximum Copy Length: 128 00:09:12.012 Maximum Source Range Count: 128 00:09:12.012 NGUID/EUI64 Never Reused: No 00:09:12.012 Namespace Write Protected: No 00:09:12.012 Number of LBA Formats: 8 00:09:12.012 Current LBA Format: LBA Format #04 00:09:12.012 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:12.012 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:12.012 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:12.012 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:12.012 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:12.012 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:12.012 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:12.012 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:12.012 00:09:12.012 Namespace ID:3 00:09:12.012 Error Recovery Timeout: Unlimited 00:09:12.012 Command Set Identifier: NVM (00h) 00:09:12.012 Deallocate: Supported 00:09:12.012 Deallocated/Unwritten Error: Supported 00:09:12.012 Deallocated Read Value: All 0x00 00:09:12.012 Deallocate in Write Zeroes: Not Supported 00:09:12.012 Deallocated Guard Field: 0xFFFF 00:09:12.012 Flush: Supported 00:09:12.012 Reservation: Not Supported 00:09:12.012 Namespace Sharing Capabilities: Private 00:09:12.012 Size (in LBAs): 1048576 (4GiB) 00:09:12.012 Capacity (in LBAs): 1048576 (4GiB) 00:09:12.012 Utilization (in LBAs): 1048576 (4GiB) 00:09:12.012 Thin Provisioning: Not Supported 00:09:12.012 Per-NS Atomic Units: No 00:09:12.012 Maximum Single Source Range Length: 128 00:09:12.012 Maximum Copy Length: 128 00:09:12.012 Maximum Source Range Count: 128 00:09:12.012 NGUID/EUI64 Never Reused: No 00:09:12.012 Namespace Write Protected: No 00:09:12.012 Number of LBA Formats: 8 00:09:12.012 Current LBA Format: LBA Format #04 00:09:12.012 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:12.012 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:12.012 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:12.012 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:12.012 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:12.012 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:12.012 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:12.012 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:12.012 00:09:12.012 14:10:10 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:12.012 14:10:10 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:09:12.012 ===================================================== 00:09:12.012 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:12.013 ===================================================== 00:09:12.013 Controller Capabilities/Features 00:09:12.013 ================================ 00:09:12.013 Vendor ID: 1b36 00:09:12.013 Subsystem Vendor ID: 1af4 00:09:12.013 Serial Number: 12343 00:09:12.013 Model Number: QEMU NVMe Ctrl 00:09:12.013 Firmware Version: 8.0.0 00:09:12.013 Recommended Arb Burst: 6 00:09:12.013 IEEE OUI Identifier: 00 54 52 00:09:12.013 Multi-path I/O 00:09:12.013 May have multiple subsystem ports: No 00:09:12.013 May have multiple controllers: Yes 00:09:12.013 Associated with SR-IOV VF: No 00:09:12.013 Max Data Transfer Size: 524288 00:09:12.013 Max Number of Namespaces: 256 00:09:12.013 Max Number of I/O Queues: 64 00:09:12.013 NVMe Specification Version (VS): 1.4 00:09:12.013 NVMe Specification Version (Identify): 1.4 00:09:12.013 Maximum Queue Entries: 2048 
00:09:12.013 Contiguous Queues Required: Yes 00:09:12.013 Arbitration Mechanisms Supported 00:09:12.013 Weighted Round Robin: Not Supported 00:09:12.013 Vendor Specific: Not Supported 00:09:12.013 Reset Timeout: 7500 ms 00:09:12.013 Doorbell Stride: 4 bytes 00:09:12.013 NVM Subsystem Reset: Not Supported 00:09:12.013 Command Sets Supported 00:09:12.013 NVM Command Set: Supported 00:09:12.013 Boot Partition: Not Supported 00:09:12.013 Memory Page Size Minimum: 4096 bytes 00:09:12.013 Memory Page Size Maximum: 65536 bytes 00:09:12.013 Persistent Memory Region: Not Supported 00:09:12.013 Optional Asynchronous Events Supported 00:09:12.013 Namespace Attribute Notices: Supported 00:09:12.013 Firmware Activation Notices: Not Supported 00:09:12.013 ANA Change Notices: Not Supported 00:09:12.013 PLE Aggregate Log Change Notices: Not Supported 00:09:12.013 LBA Status Info Alert Notices: Not Supported 00:09:12.013 EGE Aggregate Log Change Notices: Not Supported 00:09:12.013 Normal NVM Subsystem Shutdown event: Not Supported 00:09:12.013 Zone Descriptor Change Notices: Not Supported 00:09:12.013 Discovery Log Change Notices: Not Supported 00:09:12.013 Controller Attributes 00:09:12.013 128-bit Host Identifier: Not Supported 00:09:12.013 Non-Operational Permissive Mode: Not Supported 00:09:12.013 NVM Sets: Not Supported 00:09:12.013 Read Recovery Levels: Not Supported 00:09:12.013 Endurance Groups: Supported 00:09:12.013 Predictable Latency Mode: Not Supported 00:09:12.013 Traffic Based Keep Alive: Not Supported 00:09:12.013 Namespace Granularity: Not Supported 00:09:12.013 SQ Associations: Not Supported 00:09:12.013 UUID List: Not Supported 00:09:12.013 Multi-Domain Subsystem: Not Supported 00:09:12.013 Fixed Capacity Management: Not Supported 00:09:12.013 Variable Capacity Management: Not Supported 00:09:12.013 Delete Endurance Group: Not Supported 00:09:12.013 Delete NVM Set: Not Supported 00:09:12.013 Extended LBA Formats Supported: Supported 00:09:12.013 Flexible Data Placement Supported: Supported 00:09:12.013 00:09:12.013 Controller Memory Buffer Support 00:09:12.013 ================================ 00:09:12.013 Supported: No 00:09:12.013 00:09:12.013 Persistent Memory Region Support 00:09:12.013 ================================ 00:09:12.013 Supported: No 00:09:12.013 00:09:12.013 Admin Command Set Attributes 00:09:12.013 ============================ 00:09:12.013 Security Send/Receive: Not Supported 00:09:12.013 Format NVM: Supported 00:09:12.013 Firmware Activate/Download: Not Supported 00:09:12.013 Namespace Management: Supported 00:09:12.013 Device Self-Test: Not Supported 00:09:12.013 Directives: Supported 00:09:12.013 NVMe-MI: Not Supported 00:09:12.013 Virtualization Management: Not Supported 00:09:12.013 Doorbell Buffer Config: Supported 00:09:12.013 Get LBA Status Capability: Not Supported 00:09:12.013 Command & Feature Lockdown Capability: Not Supported 00:09:12.013 Abort Command Limit: 4 00:09:12.013 Async Event Request Limit: 4 00:09:12.013 Number of Firmware Slots: N/A 00:09:12.013 Firmware Slot 1 Read-Only: N/A 00:09:12.013 Firmware Activation Without Reset: N/A 00:09:12.013 Multiple Update Detection Support: N/A 00:09:12.013 Firmware Update Granularity: No Information Provided 00:09:12.013 Per-Namespace SMART Log: Yes 00:09:12.013 Asymmetric Namespace Access Log Page: Not Supported 00:09:12.013 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:12.013 Command Effects Log Page: Supported 00:09:12.013 Get Log Page Extended Data: Supported 00:09:12.013 Telemetry Log Pages: Not 
Supported 00:09:12.013 Persistent Event Log Pages: Not Supported 00:09:12.013 Supported Log Pages Log Page: May Support 00:09:12.013 Commands Supported & Effects Log Page: Not Supported 00:09:12.013 Feature Identifiers & Effects Log Page: May Support 00:09:12.013 NVMe-MI Commands & Effects Log Page: May Support 00:09:12.013 Data Area 4 for Telemetry Log: Not Supported 00:09:12.013 Error Log Page Entries Supported: 1 00:09:12.013 Keep Alive: Not Supported 00:09:12.013 00:09:12.013 NVM Command Set Attributes 00:09:12.013 ========================== 00:09:12.013 Submission Queue Entry Size 00:09:12.013 Max: 64 00:09:12.013 Min: 64 00:09:12.013 Completion Queue Entry Size 00:09:12.013 Max: 16 00:09:12.013 Min: 16 00:09:12.013 Number of Namespaces: 256 00:09:12.013 Compare Command: Supported 00:09:12.013 Write Uncorrectable Command: Not Supported 00:09:12.013 Dataset Management Command: Supported 00:09:12.013 Write Zeroes Command: Supported 00:09:12.013 Set Features Save Field: Supported 00:09:12.013 Reservations: Not Supported 00:09:12.013 Timestamp: Supported 00:09:12.013 Copy: Supported 00:09:12.013 Volatile Write Cache: Present 00:09:12.013 Atomic Write Unit (Normal): 1 00:09:12.013 Atomic Write Unit (PFail): 1 00:09:12.013 Atomic Compare & Write Unit: 1 00:09:12.013 Fused Compare & Write: Not Supported 00:09:12.013 Scatter-Gather List 00:09:12.013 SGL Command Set: Supported 00:09:12.013 SGL Keyed: Not Supported 00:09:12.013 SGL Bit Bucket Descriptor: Not Supported 00:09:12.013 SGL Metadata Pointer: Not Supported 00:09:12.013 Oversized SGL: Not Supported 00:09:12.013 SGL Metadata Address: Not Supported 00:09:12.013 SGL Offset: Not Supported 00:09:12.013 Transport SGL Data Block: Not Supported 00:09:12.013 Replay Protected Memory Block: Not Supported 00:09:12.013 00:09:12.013 Firmware Slot Information 00:09:12.013 ========================= 00:09:12.013 Active slot: 1 00:09:12.013 Slot 1 Firmware Revision: 1.0 00:09:12.013 00:09:12.013 00:09:12.013 Commands Supported and Effects 00:09:12.013 ============================== 00:09:12.013 Admin Commands 00:09:12.013 -------------- 00:09:12.013 Delete I/O Submission Queue (00h): Supported 00:09:12.013 Create I/O Submission Queue (01h): Supported 00:09:12.013 Get Log Page (02h): Supported 00:09:12.013 Delete I/O Completion Queue (04h): Supported 00:09:12.013 Create I/O Completion Queue (05h): Supported 00:09:12.013 Identify (06h): Supported 00:09:12.013 Abort (08h): Supported 00:09:12.013 Set Features (09h): Supported 00:09:12.013 Get Features (0Ah): Supported 00:09:12.013 Asynchronous Event Request (0Ch): Supported 00:09:12.014 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:12.014 Directive Send (19h): Supported 00:09:12.014 Directive Receive (1Ah): Supported 00:09:12.014 Virtualization Management (1Ch): Supported 00:09:12.014 Doorbell Buffer Config (7Ch): Supported 00:09:12.014 Format NVM (80h): Supported LBA-Change 00:09:12.014 I/O Commands 00:09:12.014 ------------ 00:09:12.014 Flush (00h): Supported LBA-Change 00:09:12.014 Write (01h): Supported LBA-Change 00:09:12.014 Read (02h): Supported 00:09:12.014 Compare (05h): Supported 00:09:12.014 Write Zeroes (08h): Supported LBA-Change 00:09:12.014 Dataset Management (09h): Supported LBA-Change 00:09:12.014 Unknown (0Ch): Supported 00:09:12.014 Unknown (12h): Supported 00:09:12.014 Copy (19h): Supported LBA-Change 00:09:12.014 Unknown (1Dh): Supported LBA-Change 00:09:12.014 00:09:12.014 Error Log 00:09:12.014 ========= 00:09:12.014 00:09:12.014 Arbitration 00:09:12.014 =========== 
00:09:12.014 Arbitration Burst: no limit 00:09:12.014 00:09:12.014 Power Management 00:09:12.014 ================ 00:09:12.014 Number of Power States: 1 00:09:12.014 Current Power State: Power State #0 00:09:12.014 Power State #0: 00:09:12.014 Max Power: 25.00 W 00:09:12.014 Non-Operational State: Operational 00:09:12.014 Entry Latency: 16 microseconds 00:09:12.014 Exit Latency: 4 microseconds 00:09:12.014 Relative Read Throughput: 0 00:09:12.014 Relative Read Latency: 0 00:09:12.014 Relative Write Throughput: 0 00:09:12.014 Relative Write Latency: 0 00:09:12.014 Idle Power: Not Reported 00:09:12.014 Active Power: Not Reported 00:09:12.014 Non-Operational Permissive Mode: Not Supported 00:09:12.014 00:09:12.014 Health Information 00:09:12.014 ================== 00:09:12.014 Critical Warnings: 00:09:12.014 Available Spare Space: OK 00:09:12.014 Temperature: OK 00:09:12.014 Device Reliability: OK 00:09:12.014 Read Only: No 00:09:12.014 Volatile Memory Backup: OK 00:09:12.014 Current Temperature: 323 Kelvin (50 Celsius) 00:09:12.014 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:12.014 Available Spare: 0% 00:09:12.014 Available Spare Threshold: 0% 00:09:12.014 Life Percentage Used: 0% 00:09:12.014 Data Units Read: 1301 00:09:12.014 Data Units Written: 601 00:09:12.014 Host Read Commands: 52254 00:09:12.014 Host Write Commands: 25503 00:09:12.014 Controller Busy Time: 0 minutes 00:09:12.014 Power Cycles: 0 00:09:12.014 Power On Hours: 0 hours 00:09:12.014 Unsafe Shutdowns: 0 00:09:12.014 Unrecoverable Media Errors: 0 00:09:12.014 Lifetime Error Log Entries: 0 00:09:12.014 Warning Temperature Time: 0 minutes 00:09:12.014 Critical Temperature Time: 0 minutes 00:09:12.014 00:09:12.014 Number of Queues 00:09:12.014 ================ 00:09:12.014 Number of I/O Submission Queues: 64 00:09:12.014 Number of I/O Completion Queues: 64 00:09:12.014 00:09:12.014 ZNS Specific Controller Data 00:09:12.014 ============================ 00:09:12.014 Zone Append Size Limit: 0 00:09:12.014 00:09:12.014 00:09:12.014 Active Namespaces 00:09:12.014 ================= 00:09:12.014 Namespace ID:1 00:09:12.014 Error Recovery Timeout: Unlimited 00:09:12.014 Command Set Identifier: NVM (00h) 00:09:12.014 Deallocate: Supported 00:09:12.014 Deallocated/Unwritten Error: Supported 00:09:12.014 Deallocated Read Value: All 0x00 00:09:12.014 Deallocate in Write Zeroes: Not Supported 00:09:12.014 Deallocated Guard Field: 0xFFFF 00:09:12.014 Flush: Supported 00:09:12.014 Reservation: Not Supported 00:09:12.014 Namespace Sharing Capabilities: Multiple Controllers 00:09:12.014 Size (in LBAs): 262144 (1GiB) 00:09:12.014 Capacity (in LBAs): 262144 (1GiB) 00:09:12.014 Utilization (in LBAs): 262144 (1GiB) 00:09:12.014 Thin Provisioning: Not Supported 00:09:12.014 Per-NS Atomic Units: No 00:09:12.014 Maximum Single Source Range Length: 128 00:09:12.014 Maximum Copy Length: 128 00:09:12.014 Maximum Source Range Count: 128 00:09:12.014 NGUID/EUI64 Never Reused: No 00:09:12.014 Namespace Write Protected: No 00:09:12.014 Endurance group ID: 1 00:09:12.014 Number of LBA Formats: 8 00:09:12.014 Current LBA Format: LBA Format #04 00:09:12.014 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:12.014 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:12.014 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:12.014 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:12.014 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:12.014 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:12.014 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:09:12.014 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:12.014 00:09:12.014 Get Feature FDP: 00:09:12.014 ================ 00:09:12.014 Enabled: Yes 00:09:12.014 FDP configuration index: 0 00:09:12.014 00:09:12.014 FDP configurations log page 00:09:12.014 =========================== 00:09:12.014 Number of FDP configurations: 1 00:09:12.014 Version: 0 00:09:12.014 Size: 112 00:09:12.014 FDP Configuration Descriptor: 0 00:09:12.014 Descriptor Size: 96 00:09:12.014 Reclaim Group Identifier format: 2 00:09:12.014 FDP Volatile Write Cache: Not Present 00:09:12.014 FDP Configuration: Valid 00:09:12.014 Vendor Specific Size: 0 00:09:12.014 Number of Reclaim Groups: 2 00:09:12.014 Number of Reclaim Unit Handles: 8 00:09:12.014 Max Placement Identifiers: 128 00:09:12.014 Number of Namespaces Supported: 256 00:09:12.014 Reclaim Unit Nominal Size: 6000000 bytes 00:09:12.014 Estimated Reclaim Unit Time Limit: Not Reported 00:09:12.014 RUH Desc #000: RUH Type: Initially Isolated 00:09:12.014 RUH Desc #001: RUH Type: Initially Isolated 00:09:12.014 RUH Desc #002: RUH Type: Initially Isolated 00:09:12.014 RUH Desc #003: RUH Type: Initially Isolated 00:09:12.014 RUH Desc #004: RUH Type: Initially Isolated 00:09:12.014 RUH Desc #005: RUH Type: Initially Isolated 00:09:12.014 RUH Desc #006: RUH Type: Initially Isolated 00:09:12.014 RUH Desc #007: RUH Type: Initially Isolated 00:09:12.014 00:09:12.014 FDP reclaim unit handle usage log page 00:09:12.014 ====================================== 00:09:12.014 Number of Reclaim Unit Handles: 8 00:09:12.014 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:12.014 RUH Usage Desc #001: RUH Attributes: Unused 00:09:12.014 RUH Usage Desc #002: RUH Attributes: Unused 00:09:12.014 RUH Usage Desc #003: RUH Attributes: Unused 00:09:12.014 RUH Usage Desc #004: RUH Attributes: Unused 00:09:12.014 RUH Usage Desc #005: RUH Attributes: Unused 00:09:12.014 RUH Usage Desc #006: RUH Attributes: Unused 00:09:12.014 RUH Usage Desc #007: RUH Attributes: Unused 00:09:12.014 00:09:12.015 FDP statistics log page 00:09:12.015 ======================= 00:09:12.015 Host bytes with metadata written: 400842752 00:09:12.015 Media bytes with metadata written: 400941056 00:09:12.015 Media bytes erased: 0 00:09:12.015 00:09:12.015 FDP events log page 00:09:12.015 =================== 00:09:12.015 Number of FDP events: 0 00:09:12.015 00:09:12.015 00:09:12.015 real 0m1.151s 00:09:12.015 user 0m0.391s 00:09:12.015 sys 0m0.533s 00:09:12.015 14:10:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:12.015 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:09:12.015 ************************************ 00:09:12.015 END TEST nvme_identify 00:09:12.015 ************************************ 00:09:12.273 14:10:10 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:12.273 14:10:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:12.273 14:10:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:12.273 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:09:12.273 ************************************ 00:09:12.273 START TEST nvme_perf 00:09:12.273 ************************************ 00:09:12.273 14:10:10 -- common/autotest_common.sh@1114 -- # nvme_perf 00:09:12.273 14:10:10 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:13.652 Initializing NVMe Controllers 00:09:13.652 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:13.652 
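For reference, the two SPDK tools that nvme.sh drives above can also be invoked by hand against the same QEMU-emulated controllers. A minimal sketch, assuming the SPDK checkout and build under /home/vagrant/spdk_repo/spdk used by this job, and that scripts/setup.sh has already bound the devices to a userspace driver and reserved hugepages; flag meanings are taken from the tools' usage text, and -N is carried over from the captured command line as-is:

  # Dump the controller and namespace data structures for one PCIe
  # controller; -r selects the transport ID, -i the shared-memory group ID.
  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:PCIe traddr:0000:00:09.0' -i 0

  # Queue depth 128 (-q), sequential reads (-w read), 12 KiB I/Os
  # (-o 12288), 1 second runtime (-t 1); giving -L twice enables the
  # detailed software latency histograms that follow in this log.
  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -q 128 -w read -o 12288 -t 1 -LL -i 0 -N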
Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:13.652 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:13.652 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:13.652 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:13.652 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:13.652 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:13.652 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:13.652 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:13.652 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:13.652 Initialization complete. Launching workers. 00:09:13.652 ======================================================== 00:09:13.652 Latency(us) 00:09:13.652 Device Information : IOPS MiB/s Average min max 00:09:13.652 PCIE (0000:00:06.0) NSID 1 from core 0: 18125.68 212.41 7058.04 5311.27 32290.72 00:09:13.652 PCIE (0000:00:07.0) NSID 1 from core 0: 18125.68 212.41 7052.74 5014.63 30870.90 00:09:13.652 PCIE (0000:00:09.0) NSID 1 from core 0: 18125.68 212.41 7046.07 5496.65 30310.96 00:09:13.652 PCIE (0000:00:08.0) NSID 1 from core 0: 18125.68 212.41 7039.31 5517.95 28886.10 00:09:13.652 PCIE (0000:00:08.0) NSID 2 from core 0: 18125.68 212.41 7032.67 5479.38 27422.99 00:09:13.652 PCIE (0000:00:08.0) NSID 3 from core 0: 18253.33 213.91 6977.22 5504.99 18967.27 00:09:13.652 ======================================================== 00:09:13.652 Total : 108881.74 1275.96 7034.27 5014.63 32290.72 00:09:13.652 00:09:13.652 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:13.652 ================================================================================= 00:09:13.652 1.00000% : 5469.735us 00:09:13.652 10.00000% : 5747.003us 00:09:13.652 25.00000% : 6125.095us 00:09:13.652 50.00000% : 6704.837us 00:09:13.652 75.00000% : 7309.785us 00:09:13.652 90.00000% : 7813.908us 00:09:13.652 95.00000% : 9880.812us 00:09:13.652 98.00000% : 11897.305us 00:09:13.652 99.00000% : 13913.797us 00:09:13.652 99.50000% : 30045.735us 00:09:13.652 99.90000% : 31860.578us 00:09:13.652 99.99000% : 32263.877us 00:09:13.652 99.99900% : 32465.526us 00:09:13.652 99.99990% : 32465.526us 00:09:13.652 99.99999% : 32465.526us 00:09:13.652 00:09:13.652 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:13.652 ================================================================================= 00:09:13.652 1.00000% : 5620.972us 00:09:13.652 10.00000% : 5873.034us 00:09:13.652 25.00000% : 6200.714us 00:09:13.652 50.00000% : 6755.249us 00:09:13.652 75.00000% : 7259.372us 00:09:13.652 90.00000% : 7763.495us 00:09:13.652 95.00000% : 9578.338us 00:09:13.652 98.00000% : 11544.418us 00:09:13.652 99.00000% : 13913.797us 00:09:13.652 99.50000% : 28634.191us 00:09:13.652 99.90000% : 30449.034us 00:09:13.652 99.99000% : 30852.332us 00:09:13.652 99.99900% : 31053.982us 00:09:13.652 99.99990% : 31053.982us 00:09:13.652 99.99999% : 31053.982us 00:09:13.652 00:09:13.652 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:13.652 ================================================================================= 00:09:13.652 1.00000% : 5595.766us 00:09:13.652 10.00000% : 5873.034us 00:09:13.652 25.00000% : 6200.714us 00:09:13.652 50.00000% : 6755.249us 00:09:13.652 75.00000% : 7259.372us 00:09:13.652 90.00000% : 7813.908us 00:09:13.652 95.00000% : 9175.040us 00:09:13.652 98.00000% : 12300.603us 00:09:13.652 99.00000% : 14115.446us 00:09:13.652 99.50000% : 28029.243us 00:09:13.652 99.90000% : 29844.086us 
00:09:13.652 99.99000% : 30449.034us 00:09:13.652 99.99900% : 30449.034us 00:09:13.652 99.99990% : 30449.034us 00:09:13.652 99.99999% : 30449.034us 00:09:13.652 00:09:13.652 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:13.652 ================================================================================= 00:09:13.652 1.00000% : 5620.972us 00:09:13.652 10.00000% : 5873.034us 00:09:13.652 25.00000% : 6200.714us 00:09:13.652 50.00000% : 6704.837us 00:09:13.652 75.00000% : 7259.372us 00:09:13.652 90.00000% : 7763.495us 00:09:13.652 95.00000% : 9074.215us 00:09:13.652 98.00000% : 12804.726us 00:09:13.652 99.00000% : 14518.745us 00:09:13.652 99.50000% : 26617.698us 00:09:13.652 99.90000% : 28432.542us 00:09:13.652 99.99000% : 29037.489us 00:09:13.652 99.99900% : 29037.489us 00:09:13.652 99.99990% : 29037.489us 00:09:13.652 99.99999% : 29037.489us 00:09:13.652 00:09:13.652 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:13.652 ================================================================================= 00:09:13.652 1.00000% : 5620.972us 00:09:13.652 10.00000% : 5847.828us 00:09:13.652 25.00000% : 6200.714us 00:09:13.652 50.00000% : 6704.837us 00:09:13.652 75.00000% : 7259.372us 00:09:13.652 90.00000% : 7713.083us 00:09:13.652 95.00000% : 9427.102us 00:09:13.652 98.00000% : 12754.314us 00:09:13.652 99.00000% : 14922.043us 00:09:13.652 99.50000% : 25105.329us 00:09:13.652 99.90000% : 27020.997us 00:09:13.652 99.99000% : 27424.295us 00:09:13.652 99.99900% : 27424.295us 00:09:13.652 99.99990% : 27424.295us 00:09:13.652 99.99999% : 27424.295us 00:09:13.652 00:09:13.652 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:13.652 ================================================================================= 00:09:13.652 1.00000% : 5620.972us 00:09:13.652 10.00000% : 5873.034us 00:09:13.652 25.00000% : 6200.714us 00:09:13.652 50.00000% : 6704.837us 00:09:13.652 75.00000% : 7208.960us 00:09:13.652 90.00000% : 7713.083us 00:09:13.652 95.00000% : 9729.575us 00:09:13.652 98.00000% : 12603.077us 00:09:13.652 99.00000% : 14417.920us 00:09:13.652 99.50000% : 16636.062us 00:09:13.652 99.90000% : 18551.729us 00:09:13.652 99.99000% : 18955.028us 00:09:13.652 99.99900% : 19055.852us 00:09:13.652 99.99990% : 19055.852us 00:09:13.652 99.99999% : 19055.852us 00:09:13.652 00:09:13.652 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:13.652 ============================================================================== 00:09:13.652 Range in us Cumulative IO count 00:09:13.652 5293.292 - 5318.498: 0.0110% ( 2) 00:09:13.652 5318.498 - 5343.705: 0.0220% ( 2) 00:09:13.652 5343.705 - 5368.911: 0.0825% ( 11) 00:09:13.652 5368.911 - 5394.117: 0.2311% ( 27) 00:09:13.652 5394.117 - 5419.323: 0.4897% ( 47) 00:09:13.652 5419.323 - 5444.529: 0.8913% ( 73) 00:09:13.652 5444.529 - 5469.735: 1.3369% ( 81) 00:09:13.652 5469.735 - 5494.942: 1.9311% ( 108) 00:09:13.652 5494.942 - 5520.148: 2.5583% ( 114) 00:09:13.652 5520.148 - 5545.354: 3.2405% ( 124) 00:09:13.652 5545.354 - 5570.560: 4.0548% ( 148) 00:09:13.652 5570.560 - 5595.766: 4.8801% ( 150) 00:09:13.652 5595.766 - 5620.972: 5.7163% ( 152) 00:09:13.652 5620.972 - 5646.178: 6.5801% ( 157) 00:09:13.652 5646.178 - 5671.385: 7.5594% ( 178) 00:09:13.652 5671.385 - 5696.591: 8.4727% ( 166) 00:09:13.652 5696.591 - 5721.797: 9.3915% ( 167) 00:09:13.652 5721.797 - 5747.003: 10.3763% ( 179) 00:09:13.652 5747.003 - 5772.209: 11.3391% ( 175) 00:09:13.652 5772.209 - 5797.415: 12.4175% ( 196) 
00:09:13.652 5797.415 - 5822.622: 13.3748% ( 174) 00:09:13.652 5822.622 - 5847.828: 14.4256% ( 191) 00:09:13.652 5847.828 - 5873.034: 15.5315% ( 201) 00:09:13.652 5873.034 - 5898.240: 16.4338% ( 164) 00:09:13.652 5898.240 - 5923.446: 17.4076% ( 177) 00:09:13.652 5923.446 - 5948.652: 18.4859% ( 196) 00:09:13.652 5948.652 - 5973.858: 19.5588% ( 195) 00:09:13.652 5973.858 - 5999.065: 20.5381% ( 178) 00:09:13.652 5999.065 - 6024.271: 21.6164% ( 196) 00:09:13.652 6024.271 - 6049.477: 22.6507% ( 188) 00:09:13.652 6049.477 - 6074.683: 23.6631% ( 184) 00:09:13.652 6074.683 - 6099.889: 24.7139% ( 191) 00:09:13.652 6099.889 - 6125.095: 25.7978% ( 197) 00:09:13.652 6125.095 - 6150.302: 26.8046% ( 183) 00:09:13.652 6150.302 - 6175.508: 27.8884% ( 197) 00:09:13.652 6175.508 - 6200.714: 28.8897% ( 182) 00:09:13.652 6200.714 - 6225.920: 30.0066% ( 203) 00:09:13.652 6225.920 - 6251.126: 31.0189% ( 184) 00:09:13.652 6251.126 - 6276.332: 32.0478% ( 187) 00:09:13.652 6276.332 - 6301.538: 33.1316% ( 197) 00:09:13.652 6301.538 - 6326.745: 34.2320% ( 200) 00:09:13.652 6326.745 - 6351.951: 35.2718% ( 189) 00:09:13.653 6351.951 - 6377.157: 36.3391% ( 194) 00:09:13.653 6377.157 - 6402.363: 37.4010% ( 193) 00:09:13.653 6402.363 - 6427.569: 38.4518% ( 191) 00:09:13.653 6427.569 - 6452.775: 39.5632% ( 202) 00:09:13.653 6452.775 - 6503.188: 41.6868% ( 386) 00:09:13.653 6503.188 - 6553.600: 43.8490% ( 393) 00:09:13.653 6553.600 - 6604.012: 45.9782% ( 387) 00:09:13.653 6604.012 - 6654.425: 48.0689% ( 380) 00:09:13.653 6654.425 - 6704.837: 50.1485% ( 378) 00:09:13.653 6704.837 - 6755.249: 52.3052% ( 392) 00:09:13.653 6755.249 - 6805.662: 54.3574% ( 373) 00:09:13.653 6805.662 - 6856.074: 56.4866% ( 387) 00:09:13.653 6856.074 - 6906.486: 58.5387% ( 373) 00:09:13.653 6906.486 - 6956.898: 60.6404% ( 382) 00:09:13.653 6956.898 - 7007.311: 62.8906% ( 409) 00:09:13.653 7007.311 - 7057.723: 64.9923% ( 382) 00:09:13.653 7057.723 - 7108.135: 67.0390% ( 372) 00:09:13.653 7108.135 - 7158.548: 69.2011% ( 393) 00:09:13.653 7158.548 - 7208.960: 71.3028% ( 382) 00:09:13.653 7208.960 - 7259.372: 73.3660% ( 375) 00:09:13.653 7259.372 - 7309.785: 75.5392% ( 395) 00:09:13.653 7309.785 - 7360.197: 77.6408% ( 382) 00:09:13.653 7360.197 - 7410.609: 79.7975% ( 392) 00:09:13.653 7410.609 - 7461.022: 81.8772% ( 378) 00:09:13.653 7461.022 - 7511.434: 83.9514% ( 377) 00:09:13.653 7511.434 - 7561.846: 85.7339% ( 324) 00:09:13.653 7561.846 - 7612.258: 87.3019% ( 285) 00:09:13.653 7612.258 - 7662.671: 88.4353% ( 206) 00:09:13.653 7662.671 - 7713.083: 89.1780% ( 135) 00:09:13.653 7713.083 - 7763.495: 89.7777% ( 109) 00:09:13.653 7763.495 - 7813.908: 90.2454% ( 85) 00:09:13.653 7813.908 - 7864.320: 90.6030% ( 65) 00:09:13.653 7864.320 - 7914.732: 90.9331% ( 60) 00:09:13.653 7914.732 - 7965.145: 91.1862% ( 46) 00:09:13.653 7965.145 - 8015.557: 91.4228% ( 43) 00:09:13.653 8015.557 - 8065.969: 91.6538% ( 42) 00:09:13.653 8065.969 - 8116.382: 91.8299% ( 32) 00:09:13.653 8116.382 - 8166.794: 92.0114% ( 33) 00:09:13.653 8166.794 - 8217.206: 92.1545% ( 26) 00:09:13.653 8217.206 - 8267.618: 92.2755% ( 22) 00:09:13.653 8267.618 - 8318.031: 92.3691% ( 17) 00:09:13.653 8318.031 - 8368.443: 92.4516% ( 15) 00:09:13.653 8368.443 - 8418.855: 92.5561% ( 19) 00:09:13.653 8418.855 - 8469.268: 92.6386% ( 15) 00:09:13.653 8469.268 - 8519.680: 92.7047% ( 12) 00:09:13.653 8519.680 - 8570.092: 92.7872% ( 15) 00:09:13.653 8570.092 - 8620.505: 92.8642% ( 14) 00:09:13.653 8620.505 - 8670.917: 92.9412% ( 14) 00:09:13.653 8670.917 - 8721.329: 93.0403% ( 18) 00:09:13.653 8721.329 - 
8771.742: 93.1393% ( 18) 00:09:13.653 8771.742 - 8822.154: 93.1943% ( 10) 00:09:13.653 8822.154 - 8872.566: 93.2824% ( 16) 00:09:13.653 8872.566 - 8922.978: 93.3429% ( 11) 00:09:13.653 8922.978 - 8973.391: 93.3924% ( 9) 00:09:13.653 8973.391 - 9023.803: 93.4529% ( 11) 00:09:13.653 9023.803 - 9074.215: 93.5134% ( 11) 00:09:13.653 9074.215 - 9124.628: 93.6015% ( 16) 00:09:13.653 9124.628 - 9175.040: 93.6620% ( 11) 00:09:13.653 9175.040 - 9225.452: 93.7335% ( 13) 00:09:13.653 9225.452 - 9275.865: 93.8270% ( 17) 00:09:13.653 9275.865 - 9326.277: 93.9261% ( 18) 00:09:13.653 9326.277 - 9376.689: 94.0196% ( 17) 00:09:13.653 9376.689 - 9427.102: 94.1241% ( 19) 00:09:13.653 9427.102 - 9477.514: 94.2342% ( 20) 00:09:13.653 9477.514 - 9527.926: 94.3277% ( 17) 00:09:13.653 9527.926 - 9578.338: 94.4432% ( 21) 00:09:13.653 9578.338 - 9628.751: 94.5533% ( 20) 00:09:13.653 9628.751 - 9679.163: 94.6523% ( 18) 00:09:13.653 9679.163 - 9729.575: 94.7733% ( 22) 00:09:13.653 9729.575 - 9779.988: 94.8669% ( 17) 00:09:13.653 9779.988 - 9830.400: 94.9604% ( 17) 00:09:13.653 9830.400 - 9880.812: 95.0759% ( 21) 00:09:13.653 9880.812 - 9931.225: 95.1860% ( 20) 00:09:13.653 9931.225 - 9981.637: 95.2905% ( 19) 00:09:13.653 9981.637 - 10032.049: 95.3950% ( 19) 00:09:13.653 10032.049 - 10082.462: 95.5051% ( 20) 00:09:13.653 10082.462 - 10132.874: 95.5876% ( 15) 00:09:13.653 10132.874 - 10183.286: 95.6756% ( 16) 00:09:13.653 10183.286 - 10233.698: 95.7746% ( 18) 00:09:13.653 10233.698 - 10284.111: 95.8572% ( 15) 00:09:13.653 10284.111 - 10334.523: 95.9507% ( 17) 00:09:13.653 10334.523 - 10384.935: 96.0167% ( 12) 00:09:13.653 10384.935 - 10435.348: 96.1048% ( 16) 00:09:13.653 10435.348 - 10485.760: 96.1983% ( 17) 00:09:13.653 10485.760 - 10536.172: 96.2863% ( 16) 00:09:13.653 10536.172 - 10586.585: 96.3798% ( 17) 00:09:13.653 10586.585 - 10636.997: 96.4624% ( 15) 00:09:13.653 10636.997 - 10687.409: 96.5559% ( 17) 00:09:13.653 10687.409 - 10737.822: 96.6494% ( 17) 00:09:13.653 10737.822 - 10788.234: 96.7265% ( 14) 00:09:13.653 10788.234 - 10838.646: 96.8255% ( 18) 00:09:13.653 10838.646 - 10889.058: 96.8860% ( 11) 00:09:13.653 10889.058 - 10939.471: 96.9575% ( 13) 00:09:13.653 10939.471 - 10989.883: 97.0180% ( 11) 00:09:13.653 10989.883 - 11040.295: 97.0951% ( 14) 00:09:13.653 11040.295 - 11090.708: 97.1886% ( 17) 00:09:13.653 11090.708 - 11141.120: 97.2601% ( 13) 00:09:13.653 11141.120 - 11191.532: 97.3426% ( 15) 00:09:13.653 11191.532 - 11241.945: 97.4307% ( 16) 00:09:13.653 11241.945 - 11292.357: 97.4967% ( 12) 00:09:13.653 11292.357 - 11342.769: 97.5847% ( 16) 00:09:13.653 11342.769 - 11393.182: 97.6673% ( 15) 00:09:13.653 11393.182 - 11443.594: 97.7223% ( 10) 00:09:13.653 11443.594 - 11494.006: 97.7718% ( 9) 00:09:13.653 11494.006 - 11544.418: 97.8048% ( 6) 00:09:13.653 11544.418 - 11594.831: 97.8488% ( 8) 00:09:13.653 11594.831 - 11645.243: 97.8818% ( 6) 00:09:13.653 11645.243 - 11695.655: 97.9093% ( 5) 00:09:13.653 11695.655 - 11746.068: 97.9368% ( 5) 00:09:13.653 11746.068 - 11796.480: 97.9588% ( 4) 00:09:13.653 11796.480 - 11846.892: 97.9754% ( 3) 00:09:13.653 11846.892 - 11897.305: 98.0084% ( 6) 00:09:13.653 11897.305 - 11947.717: 98.0304% ( 4) 00:09:13.653 11947.717 - 11998.129: 98.0634% ( 6) 00:09:13.653 11998.129 - 12048.542: 98.0854% ( 4) 00:09:13.653 12048.542 - 12098.954: 98.1074% ( 4) 00:09:13.653 12098.954 - 12149.366: 98.1404% ( 6) 00:09:13.653 12149.366 - 12199.778: 98.1569% ( 3) 00:09:13.653 12199.778 - 12250.191: 98.2009% ( 8) 00:09:13.653 12250.191 - 12300.603: 98.2119% ( 2) 00:09:13.653 12300.603 - 
12351.015: 98.2339% ( 4) 00:09:13.653 12351.015 - 12401.428: 98.2669% ( 6) 00:09:13.653 12401.428 - 12451.840: 98.2890% ( 4) 00:09:13.653 12451.840 - 12502.252: 98.3110% ( 4) 00:09:13.653 12502.252 - 12552.665: 98.3440% ( 6) 00:09:13.653 12552.665 - 12603.077: 98.3605% ( 3) 00:09:13.653 12603.077 - 12653.489: 98.3880% ( 5) 00:09:13.653 12653.489 - 12703.902: 98.4155% ( 5) 00:09:13.653 12703.902 - 12754.314: 98.4375% ( 4) 00:09:13.653 12754.314 - 12804.726: 98.4650% ( 5) 00:09:13.653 12804.726 - 12855.138: 98.4815% ( 3) 00:09:13.653 12855.138 - 12905.551: 98.5200% ( 7) 00:09:13.653 12905.551 - 13006.375: 98.5750% ( 10) 00:09:13.653 13006.375 - 13107.200: 98.6191% ( 8) 00:09:13.653 13107.200 - 13208.025: 98.6631% ( 8) 00:09:13.653 13208.025 - 13308.849: 98.7181% ( 10) 00:09:13.653 13308.849 - 13409.674: 98.7566% ( 7) 00:09:13.653 13409.674 - 13510.498: 98.8116% ( 10) 00:09:13.653 13510.498 - 13611.323: 98.8666% ( 10) 00:09:13.653 13611.323 - 13712.148: 98.9107% ( 8) 00:09:13.653 13712.148 - 13812.972: 98.9547% ( 8) 00:09:13.653 13812.972 - 13913.797: 99.0097% ( 10) 00:09:13.653 13913.797 - 14014.622: 99.0537% ( 8) 00:09:13.653 14014.622 - 14115.446: 99.0977% ( 8) 00:09:13.653 14115.446 - 14216.271: 99.1197% ( 4) 00:09:13.653 14216.271 - 14317.095: 99.1582% ( 7) 00:09:13.653 14317.095 - 14417.920: 99.1857% ( 5) 00:09:13.653 14417.920 - 14518.745: 99.2188% ( 6) 00:09:13.653 14518.745 - 14619.569: 99.2518% ( 6) 00:09:13.653 14619.569 - 14720.394: 99.2848% ( 6) 00:09:13.653 14720.394 - 14821.218: 99.2958% ( 2) 00:09:13.653 28634.191 - 28835.840: 99.3068% ( 2) 00:09:13.653 28835.840 - 29037.489: 99.3398% ( 6) 00:09:13.653 29037.489 - 29239.138: 99.3783% ( 7) 00:09:13.653 29239.138 - 29440.788: 99.4168% ( 7) 00:09:13.653 29440.788 - 29642.437: 99.4608% ( 8) 00:09:13.653 29642.437 - 29844.086: 99.4938% ( 6) 00:09:13.653 29844.086 - 30045.735: 99.5379% ( 8) 00:09:13.653 30045.735 - 30247.385: 99.5819% ( 8) 00:09:13.653 30247.385 - 30449.034: 99.6204% ( 7) 00:09:13.653 30449.034 - 30650.683: 99.6534% ( 6) 00:09:13.653 30650.683 - 30852.332: 99.7029% ( 9) 00:09:13.653 30852.332 - 31053.982: 99.7469% ( 8) 00:09:13.653 31053.982 - 31255.631: 99.7854% ( 7) 00:09:13.653 31255.631 - 31457.280: 99.8294% ( 8) 00:09:13.653 31457.280 - 31658.929: 99.8735% ( 8) 00:09:13.653 31658.929 - 31860.578: 99.9175% ( 8) 00:09:13.653 31860.578 - 32062.228: 99.9560% ( 7) 00:09:13.653 32062.228 - 32263.877: 99.9945% ( 7) 00:09:13.653 32263.877 - 32465.526: 100.0000% ( 1) 00:09:13.653 00:09:13.653 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:13.653 ============================================================================== 00:09:13.653 Range in us Cumulative IO count 00:09:13.653 4990.818 - 5016.025: 0.0055% ( 1) 00:09:13.653 5016.025 - 5041.231: 0.0110% ( 1) 00:09:13.653 5041.231 - 5066.437: 0.0165% ( 1) 00:09:13.653 5066.437 - 5091.643: 0.0275% ( 2) 00:09:13.653 5091.643 - 5116.849: 0.0330% ( 1) 00:09:13.653 5116.849 - 5142.055: 0.0385% ( 1) 00:09:13.653 5142.055 - 5167.262: 0.0440% ( 1) 00:09:13.653 5167.262 - 5192.468: 0.0550% ( 2) 00:09:13.653 5192.468 - 5217.674: 0.0605% ( 1) 00:09:13.653 5217.674 - 5242.880: 0.0660% ( 1) 00:09:13.653 5242.880 - 5268.086: 0.0715% ( 1) 00:09:13.654 5268.086 - 5293.292: 0.0825% ( 2) 00:09:13.654 5293.292 - 5318.498: 0.0880% ( 1) 00:09:13.654 5318.498 - 5343.705: 0.0935% ( 1) 00:09:13.654 5343.705 - 5368.911: 0.0990% ( 1) 00:09:13.654 5368.911 - 5394.117: 0.1100% ( 2) 00:09:13.654 5394.117 - 5419.323: 0.1155% ( 1) 00:09:13.654 5419.323 - 5444.529: 0.1210% ( 1) 
00:09:13.654 5444.529 - 5469.735: 0.1265% ( 1) 00:09:13.654 5469.735 - 5494.942: 0.1761% ( 9) 00:09:13.654 5494.942 - 5520.148: 0.2146% ( 7) 00:09:13.654 5520.148 - 5545.354: 0.3081% ( 17) 00:09:13.654 5545.354 - 5570.560: 0.5282% ( 40) 00:09:13.654 5570.560 - 5595.766: 0.8088% ( 51) 00:09:13.654 5595.766 - 5620.972: 1.2489% ( 80) 00:09:13.654 5620.972 - 5646.178: 2.1567% ( 165) 00:09:13.654 5646.178 - 5671.385: 3.0755% ( 167) 00:09:13.654 5671.385 - 5696.591: 3.9062% ( 151) 00:09:13.654 5696.591 - 5721.797: 4.7535% ( 154) 00:09:13.654 5721.797 - 5747.003: 5.5898% ( 152) 00:09:13.654 5747.003 - 5772.209: 6.5251% ( 170) 00:09:13.654 5772.209 - 5797.415: 7.5539% ( 187) 00:09:13.654 5797.415 - 5822.622: 8.5772% ( 186) 00:09:13.654 5822.622 - 5847.828: 9.6941% ( 203) 00:09:13.654 5847.828 - 5873.034: 10.7064% ( 184) 00:09:13.654 5873.034 - 5898.240: 11.8013% ( 199) 00:09:13.654 5898.240 - 5923.446: 12.8741% ( 195) 00:09:13.654 5923.446 - 5948.652: 13.9745% ( 200) 00:09:13.654 5948.652 - 5973.858: 15.0033% ( 187) 00:09:13.654 5973.858 - 5999.065: 16.1257% ( 204) 00:09:13.654 5999.065 - 6024.271: 17.2535% ( 205) 00:09:13.654 6024.271 - 6049.477: 18.5739% ( 240) 00:09:13.654 6049.477 - 6074.683: 19.7128% ( 207) 00:09:13.654 6074.683 - 6099.889: 20.8462% ( 206) 00:09:13.654 6099.889 - 6125.095: 21.9355% ( 198) 00:09:13.654 6125.095 - 6150.302: 23.0689% ( 206) 00:09:13.654 6150.302 - 6175.508: 24.2243% ( 210) 00:09:13.654 6175.508 - 6200.714: 25.4787% ( 228) 00:09:13.654 6200.714 - 6225.920: 26.6505% ( 213) 00:09:13.654 6225.920 - 6251.126: 27.8774% ( 223) 00:09:13.654 6251.126 - 6276.332: 29.0438% ( 212) 00:09:13.654 6276.332 - 6301.538: 30.2212% ( 214) 00:09:13.654 6301.538 - 6326.745: 31.3985% ( 214) 00:09:13.654 6326.745 - 6351.951: 32.6144% ( 221) 00:09:13.654 6351.951 - 6377.157: 33.7808% ( 212) 00:09:13.654 6377.157 - 6402.363: 35.0352% ( 228) 00:09:13.654 6402.363 - 6427.569: 36.2676% ( 224) 00:09:13.654 6427.569 - 6452.775: 37.5275% ( 229) 00:09:13.654 6452.775 - 6503.188: 39.9538% ( 441) 00:09:13.654 6503.188 - 6553.600: 42.3966% ( 444) 00:09:13.654 6553.600 - 6604.012: 44.8393% ( 444) 00:09:13.654 6604.012 - 6654.425: 47.3041% ( 448) 00:09:13.654 6654.425 - 6704.837: 49.7414% ( 443) 00:09:13.654 6704.837 - 6755.249: 52.2172% ( 450) 00:09:13.654 6755.249 - 6805.662: 54.6655% ( 445) 00:09:13.654 6805.662 - 6856.074: 57.1358% ( 449) 00:09:13.654 6856.074 - 6906.486: 59.5731% ( 443) 00:09:13.654 6906.486 - 6956.898: 62.0544% ( 451) 00:09:13.654 6956.898 - 7007.311: 64.4916% ( 443) 00:09:13.654 7007.311 - 7057.723: 67.0114% ( 458) 00:09:13.654 7057.723 - 7108.135: 69.4267% ( 439) 00:09:13.654 7108.135 - 7158.548: 71.9520% ( 459) 00:09:13.654 7158.548 - 7208.960: 74.4058% ( 446) 00:09:13.654 7208.960 - 7259.372: 76.9146% ( 456) 00:09:13.654 7259.372 - 7309.785: 79.3629% ( 445) 00:09:13.654 7309.785 - 7360.197: 81.7617% ( 436) 00:09:13.654 7360.197 - 7410.609: 83.8193% ( 374) 00:09:13.654 7410.609 - 7461.022: 85.6129% ( 326) 00:09:13.654 7461.022 - 7511.434: 86.9388% ( 241) 00:09:13.654 7511.434 - 7561.846: 87.8136% ( 159) 00:09:13.654 7561.846 - 7612.258: 88.5178% ( 128) 00:09:13.654 7612.258 - 7662.671: 89.1560% ( 116) 00:09:13.654 7662.671 - 7713.083: 89.6237% ( 85) 00:09:13.654 7713.083 - 7763.495: 90.0418% ( 76) 00:09:13.654 7763.495 - 7813.908: 90.3664% ( 59) 00:09:13.654 7813.908 - 7864.320: 90.6745% ( 56) 00:09:13.654 7864.320 - 7914.732: 90.9661% ( 53) 00:09:13.654 7914.732 - 7965.145: 91.2027% ( 43) 00:09:13.654 7965.145 - 8015.557: 91.3952% ( 35) 00:09:13.654 8015.557 - 8065.969: 
91.5658% ( 31) 00:09:13.654 8065.969 - 8116.382: 91.6978% ( 24) 00:09:13.654 8116.382 - 8166.794: 91.8299% ( 24) 00:09:13.654 8166.794 - 8217.206: 91.9674% ( 25) 00:09:13.654 8217.206 - 8267.618: 92.1215% ( 28) 00:09:13.654 8267.618 - 8318.031: 92.2480% ( 23) 00:09:13.654 8318.031 - 8368.443: 92.3966% ( 27) 00:09:13.654 8368.443 - 8418.855: 92.5671% ( 31) 00:09:13.654 8418.855 - 8469.268: 92.6717% ( 19) 00:09:13.654 8469.268 - 8519.680: 92.7817% ( 20) 00:09:13.654 8519.680 - 8570.092: 92.9082% ( 23) 00:09:13.654 8570.092 - 8620.505: 93.0348% ( 23) 00:09:13.654 8620.505 - 8670.917: 93.1723% ( 25) 00:09:13.654 8670.917 - 8721.329: 93.2934% ( 22) 00:09:13.654 8721.329 - 8771.742: 93.4089% ( 21) 00:09:13.654 8771.742 - 8822.154: 93.4969% ( 16) 00:09:13.654 8822.154 - 8872.566: 93.6015% ( 19) 00:09:13.654 8872.566 - 8922.978: 93.7060% ( 19) 00:09:13.654 8922.978 - 8973.391: 93.7940% ( 16) 00:09:13.654 8973.391 - 9023.803: 93.8930% ( 18) 00:09:13.654 9023.803 - 9074.215: 93.9921% ( 18) 00:09:13.654 9074.215 - 9124.628: 94.1021% ( 20) 00:09:13.654 9124.628 - 9175.040: 94.2176% ( 21) 00:09:13.654 9175.040 - 9225.452: 94.3387% ( 22) 00:09:13.654 9225.452 - 9275.865: 94.4597% ( 22) 00:09:13.654 9275.865 - 9326.277: 94.5698% ( 20) 00:09:13.654 9326.277 - 9376.689: 94.6908% ( 22) 00:09:13.654 9376.689 - 9427.102: 94.8063% ( 21) 00:09:13.654 9427.102 - 9477.514: 94.9054% ( 18) 00:09:13.654 9477.514 - 9527.926: 94.9934% ( 16) 00:09:13.654 9527.926 - 9578.338: 95.0594% ( 12) 00:09:13.654 9578.338 - 9628.751: 95.1364% ( 14) 00:09:13.654 9628.751 - 9679.163: 95.2025% ( 12) 00:09:13.654 9679.163 - 9729.575: 95.2685% ( 12) 00:09:13.654 9729.575 - 9779.988: 95.3290% ( 11) 00:09:13.654 9779.988 - 9830.400: 95.4060% ( 14) 00:09:13.654 9830.400 - 9880.812: 95.4886% ( 15) 00:09:13.654 9880.812 - 9931.225: 95.5876% ( 18) 00:09:13.654 9931.225 - 9981.637: 95.6976% ( 20) 00:09:13.654 9981.637 - 10032.049: 95.7857% ( 16) 00:09:13.654 10032.049 - 10082.462: 95.8627% ( 14) 00:09:13.654 10082.462 - 10132.874: 95.9562% ( 17) 00:09:13.654 10132.874 - 10183.286: 96.0497% ( 17) 00:09:13.654 10183.286 - 10233.698: 96.1543% ( 19) 00:09:13.654 10233.698 - 10284.111: 96.2478% ( 17) 00:09:13.654 10284.111 - 10334.523: 96.3523% ( 19) 00:09:13.654 10334.523 - 10384.935: 96.4404% ( 16) 00:09:13.654 10384.935 - 10435.348: 96.5339% ( 17) 00:09:13.654 10435.348 - 10485.760: 96.6219% ( 16) 00:09:13.654 10485.760 - 10536.172: 96.7044% ( 15) 00:09:13.654 10536.172 - 10586.585: 96.7870% ( 15) 00:09:13.654 10586.585 - 10636.997: 96.8640% ( 14) 00:09:13.654 10636.997 - 10687.409: 96.9410% ( 14) 00:09:13.654 10687.409 - 10737.822: 97.0125% ( 13) 00:09:13.654 10737.822 - 10788.234: 97.0951% ( 15) 00:09:13.654 10788.234 - 10838.646: 97.1721% ( 14) 00:09:13.654 10838.646 - 10889.058: 97.2491% ( 14) 00:09:13.654 10889.058 - 10939.471: 97.3316% ( 15) 00:09:13.654 10939.471 - 10989.883: 97.3922% ( 11) 00:09:13.654 10989.883 - 11040.295: 97.4527% ( 11) 00:09:13.654 11040.295 - 11090.708: 97.5077% ( 10) 00:09:13.654 11090.708 - 11141.120: 97.5737% ( 12) 00:09:13.654 11141.120 - 11191.532: 97.6177% ( 8) 00:09:13.654 11191.532 - 11241.945: 97.6838% ( 12) 00:09:13.654 11241.945 - 11292.357: 97.7333% ( 9) 00:09:13.654 11292.357 - 11342.769: 97.7993% ( 12) 00:09:13.654 11342.769 - 11393.182: 97.8598% ( 11) 00:09:13.654 11393.182 - 11443.594: 97.9258% ( 12) 00:09:13.654 11443.594 - 11494.006: 97.9809% ( 10) 00:09:13.654 11494.006 - 11544.418: 98.0359% ( 10) 00:09:13.654 11544.418 - 11594.831: 98.0854% ( 9) 00:09:13.654 11594.831 - 11645.243: 98.1459% ( 
11) 00:09:13.654 11645.243 - 11695.655: 98.1899% ( 8) 00:09:13.654 11695.655 - 11746.068: 98.2284% ( 7) 00:09:13.654 11746.068 - 11796.480: 98.2614% ( 6) 00:09:13.654 11796.480 - 11846.892: 98.2835% ( 4) 00:09:13.654 11846.892 - 11897.305: 98.3000% ( 3) 00:09:13.654 11897.305 - 11947.717: 98.3165% ( 3) 00:09:13.654 11947.717 - 11998.129: 98.3275% ( 2) 00:09:13.654 11998.129 - 12048.542: 98.3330% ( 1) 00:09:13.654 12048.542 - 12098.954: 98.3440% ( 2) 00:09:13.654 12098.954 - 12149.366: 98.3550% ( 2) 00:09:13.654 12149.366 - 12199.778: 98.3660% ( 2) 00:09:13.654 12199.778 - 12250.191: 98.3770% ( 2) 00:09:13.654 12250.191 - 12300.603: 98.3880% ( 2) 00:09:13.654 12300.603 - 12351.015: 98.3935% ( 1) 00:09:13.654 12351.015 - 12401.428: 98.4045% ( 2) 00:09:13.654 12401.428 - 12451.840: 98.4100% ( 1) 00:09:13.654 12451.840 - 12502.252: 98.4210% ( 2) 00:09:13.654 12502.252 - 12552.665: 98.4320% ( 2) 00:09:13.654 12552.665 - 12603.077: 98.4430% ( 2) 00:09:13.654 12603.077 - 12653.489: 98.4540% ( 2) 00:09:13.654 12653.489 - 12703.902: 98.4650% ( 2) 00:09:13.654 12703.902 - 12754.314: 98.4760% ( 2) 00:09:13.654 12754.314 - 12804.726: 98.5090% ( 6) 00:09:13.654 12804.726 - 12855.138: 98.5365% ( 5) 00:09:13.654 12855.138 - 12905.551: 98.5695% ( 6) 00:09:13.654 12905.551 - 13006.375: 98.6246% ( 10) 00:09:13.654 13006.375 - 13107.200: 98.6796% ( 10) 00:09:13.654 13107.200 - 13208.025: 98.7401% ( 11) 00:09:13.654 13208.025 - 13308.849: 98.8006% ( 11) 00:09:13.654 13308.849 - 13409.674: 98.8446% ( 8) 00:09:13.654 13409.674 - 13510.498: 98.8831% ( 7) 00:09:13.654 13510.498 - 13611.323: 98.9217% ( 7) 00:09:13.654 13611.323 - 13712.148: 98.9602% ( 7) 00:09:13.654 13712.148 - 13812.972: 98.9987% ( 7) 00:09:13.654 13812.972 - 13913.797: 99.0372% ( 7) 00:09:13.654 13913.797 - 14014.622: 99.0757% ( 7) 00:09:13.654 14014.622 - 14115.446: 99.1142% ( 7) 00:09:13.655 14115.446 - 14216.271: 99.1527% ( 7) 00:09:13.655 14216.271 - 14317.095: 99.1912% ( 7) 00:09:13.655 14317.095 - 14417.920: 99.2298% ( 7) 00:09:13.655 14417.920 - 14518.745: 99.2683% ( 7) 00:09:13.655 14518.745 - 14619.569: 99.2958% ( 5) 00:09:13.655 27424.295 - 27625.945: 99.3178% ( 4) 00:09:13.655 27625.945 - 27827.594: 99.3618% ( 8) 00:09:13.655 27827.594 - 28029.243: 99.4003% ( 7) 00:09:13.655 28029.243 - 28230.892: 99.4443% ( 8) 00:09:13.655 28230.892 - 28432.542: 99.4828% ( 7) 00:09:13.655 28432.542 - 28634.191: 99.5268% ( 8) 00:09:13.655 28634.191 - 28835.840: 99.5709% ( 8) 00:09:13.655 28835.840 - 29037.489: 99.6149% ( 8) 00:09:13.655 29037.489 - 29239.138: 99.6534% ( 7) 00:09:13.655 29239.138 - 29440.788: 99.6919% ( 7) 00:09:13.655 29440.788 - 29642.437: 99.7359% ( 8) 00:09:13.655 29642.437 - 29844.086: 99.7799% ( 8) 00:09:13.655 29844.086 - 30045.735: 99.8239% ( 8) 00:09:13.655 30045.735 - 30247.385: 99.8680% ( 8) 00:09:13.655 30247.385 - 30449.034: 99.9065% ( 7) 00:09:13.655 30449.034 - 30650.683: 99.9505% ( 8) 00:09:13.655 30650.683 - 30852.332: 99.9945% ( 8) 00:09:13.655 30852.332 - 31053.982: 100.0000% ( 1) 00:09:13.655 00:09:13.655 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:13.655 ============================================================================== 00:09:13.655 Range in us Cumulative IO count 00:09:13.655 5494.942 - 5520.148: 0.0770% ( 14) 00:09:13.655 5520.148 - 5545.354: 0.2311% ( 28) 00:09:13.655 5545.354 - 5570.560: 0.4842% ( 46) 00:09:13.655 5570.560 - 5595.766: 1.0398% ( 101) 00:09:13.655 5595.766 - 5620.972: 1.5790% ( 98) 00:09:13.655 5620.972 - 5646.178: 2.1787% ( 109) 00:09:13.655 5646.178 - 
5671.385: 2.8939% ( 130) 00:09:13.655 5671.385 - 5696.591: 3.6697% ( 141) 00:09:13.655 5696.591 - 5721.797: 4.6215% ( 173) 00:09:13.655 5721.797 - 5747.003: 5.6668% ( 190) 00:09:13.655 5747.003 - 5772.209: 6.6186% ( 173) 00:09:13.655 5772.209 - 5797.415: 7.6585% ( 189) 00:09:13.655 5797.415 - 5822.622: 8.6873% ( 187) 00:09:13.655 5822.622 - 5847.828: 9.7876% ( 200) 00:09:13.655 5847.828 - 5873.034: 10.9430% ( 210) 00:09:13.655 5873.034 - 5898.240: 12.1204% ( 214) 00:09:13.655 5898.240 - 5923.446: 13.1602% ( 189) 00:09:13.655 5923.446 - 5948.652: 14.2496% ( 198) 00:09:13.655 5948.652 - 5973.858: 15.4324% ( 215) 00:09:13.655 5973.858 - 5999.065: 16.5603% ( 205) 00:09:13.655 5999.065 - 6024.271: 17.7047% ( 208) 00:09:13.655 6024.271 - 6049.477: 18.8050% ( 200) 00:09:13.655 6049.477 - 6074.683: 19.9714% ( 212) 00:09:13.655 6074.683 - 6099.889: 21.1378% ( 212) 00:09:13.655 6099.889 - 6125.095: 22.2986% ( 211) 00:09:13.655 6125.095 - 6150.302: 23.4540% ( 210) 00:09:13.655 6150.302 - 6175.508: 24.6424% ( 216) 00:09:13.655 6175.508 - 6200.714: 25.8363% ( 217) 00:09:13.655 6200.714 - 6225.920: 26.9641% ( 205) 00:09:13.655 6225.920 - 6251.126: 28.2075% ( 226) 00:09:13.655 6251.126 - 6276.332: 29.4564% ( 227) 00:09:13.655 6276.332 - 6301.538: 30.6723% ( 221) 00:09:13.655 6301.538 - 6326.745: 31.8112% ( 207) 00:09:13.655 6326.745 - 6351.951: 32.9996% ( 216) 00:09:13.655 6351.951 - 6377.157: 34.1934% ( 217) 00:09:13.655 6377.157 - 6402.363: 35.4093% ( 221) 00:09:13.655 6402.363 - 6427.569: 36.5592% ( 209) 00:09:13.655 6427.569 - 6452.775: 37.7476% ( 216) 00:09:13.655 6452.775 - 6503.188: 40.1188% ( 431) 00:09:13.655 6503.188 - 6553.600: 42.5286% ( 438) 00:09:13.655 6553.600 - 6604.012: 44.9494% ( 440) 00:09:13.655 6604.012 - 6654.425: 47.3041% ( 428) 00:09:13.655 6654.425 - 6704.837: 49.7799% ( 450) 00:09:13.655 6704.837 - 6755.249: 52.2282% ( 445) 00:09:13.655 6755.249 - 6805.662: 54.6600% ( 442) 00:09:13.655 6805.662 - 6856.074: 57.1303% ( 449) 00:09:13.655 6856.074 - 6906.486: 59.5346% ( 437) 00:09:13.655 6906.486 - 6956.898: 62.0379% ( 455) 00:09:13.655 6956.898 - 7007.311: 64.4146% ( 432) 00:09:13.655 7007.311 - 7057.723: 66.9454% ( 460) 00:09:13.655 7057.723 - 7108.135: 69.3607% ( 439) 00:09:13.655 7108.135 - 7158.548: 71.8805% ( 458) 00:09:13.655 7158.548 - 7208.960: 74.3728% ( 453) 00:09:13.655 7208.960 - 7259.372: 76.8871% ( 457) 00:09:13.655 7259.372 - 7309.785: 79.3959% ( 456) 00:09:13.655 7309.785 - 7360.197: 81.7892% ( 435) 00:09:13.655 7360.197 - 7410.609: 83.9459% ( 392) 00:09:13.655 7410.609 - 7461.022: 85.6459% ( 309) 00:09:13.655 7461.022 - 7511.434: 86.9938% ( 245) 00:09:13.655 7511.434 - 7561.846: 87.8576% ( 157) 00:09:13.655 7561.846 - 7612.258: 88.5453% ( 125) 00:09:13.655 7612.258 - 7662.671: 89.0680% ( 95) 00:09:13.655 7662.671 - 7713.083: 89.5632% ( 90) 00:09:13.655 7713.083 - 7763.495: 89.9923% ( 78) 00:09:13.655 7763.495 - 7813.908: 90.4049% ( 75) 00:09:13.655 7813.908 - 7864.320: 90.7625% ( 65) 00:09:13.655 7864.320 - 7914.732: 91.0982% ( 61) 00:09:13.655 7914.732 - 7965.145: 91.3622% ( 48) 00:09:13.655 7965.145 - 8015.557: 91.5933% ( 42) 00:09:13.655 8015.557 - 8065.969: 91.7859% ( 35) 00:09:13.655 8065.969 - 8116.382: 91.9674% ( 33) 00:09:13.655 8116.382 - 8166.794: 92.1380% ( 31) 00:09:13.655 8166.794 - 8217.206: 92.3140% ( 32) 00:09:13.655 8217.206 - 8267.618: 92.4956% ( 33) 00:09:13.655 8267.618 - 8318.031: 92.6607% ( 30) 00:09:13.655 8318.031 - 8368.443: 92.8367% ( 32) 00:09:13.655 8368.443 - 8418.855: 93.0183% ( 33) 00:09:13.655 8418.855 - 8469.268: 93.2108% ( 35) 
00:09:13.655 [… per-bucket rows continue: 8469.268 us through 30449.034 us, where the cumulative count reaches 100.0000% ( 3 )]
00:09:13.656 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0:
00:09:13.656 ==============================================================================
00:09:13.656        Range in us    Cumulative   IO count
[… per-bucket rows: 5494.942 us through 29037.489 us, cumulative 100.0000% ( 2 )]
00:09:13.657 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0:
00:09:13.657 ==============================================================================
00:09:13.657        Range in us    Cumulative   IO count
[… per-bucket rows: 5469.735 us through 27424.295 us, cumulative 100.0000% ( 7 )]
00:09:13.658 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0:
00:09:13.658 ==============================================================================
00:09:13.658        Range in us    Cumulative   IO count
[… per-bucket rows: 5494.942 us through 19055.852 us, cumulative 100.0000% ( 1 )]
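Each record in the dumps above has the shape `<lower us> - <upper us>: <cumulative %> ( <I/Os in bucket> )`, with the Jenkins timestamp interleaved between records. A minimal Python sketch (helper names are mine, not part of the test suite) for pulling the tuples back out of the flattened console text:

```python
import re

# One record in the dump: "<lower us> - <upper us>: <cumulative %> ( <count> )".
BUCKET = re.compile(
    r"(?P<lo>\d+\.\d+)\s*-\s*(?P<hi>\d+\.\d+):\s*"
    r"(?P<cum>\d+\.\d+)%\s*\(\s*(?P<count>\d+)\s*\)"
)

def parse_buckets(text):
    """Yield (lower_us, upper_us, cumulative_pct, io_count) per record,
    skipping over the interleaved Jenkins timestamps."""
    for m in BUCKET.finditer(text):
        yield float(m["lo"]), float(m["hi"]), float(m["cum"]), int(m["count"])

print(list(parse_buckets("00:09:13.655 8469.268 - 8519.680: 93.3979% ( 34)")))
# -> [(8469.268, 8519.68, 93.3979, 34)]
```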
00:09:13.660 14:10:11 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:09:15.040 Initializing NVMe Controllers
00:09:15.040 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:09:15.040 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010]
00:09:15.040 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010]
00:09:15.040 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010]
00:09:15.040 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:09:15.040 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0
00:09:15.040 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0
00:09:15.040 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0
00:09:15.040 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0
00:09:15.040 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0
00:09:15.040 Initialization complete. Launching workers.
00:09:15.040 ========================================================
00:09:15.040                                                                       Latency(us)
00:09:15.040 Device Information                     :       IOPS      MiB/s    Average        min        max
00:09:15.040 PCIE (0000:00:06.0) NSID 1 from core 0:   17816.54     208.79    7181.19    5009.91   27152.63
00:09:15.040 PCIE (0000:00:07.0) NSID 1 from core 0:   17816.54     208.79    7177.38    5481.25   25189.39
00:09:15.040 PCIE (0000:00:09.0) NSID 1 from core 0:   17816.54     208.79    7170.89    5362.50   24486.43
00:09:15.040 PCIE (0000:00:08.0) NSID 1 from core 0:   17816.54     208.79    7164.46    5315.98   23080.83
00:09:15.040 PCIE (0000:00:08.0) NSID 2 from core 0:   17816.54     208.79    7158.16    5504.98   21455.33
00:09:15.040 PCIE (0000:00:08.0) NSID 3 from core 0:   17816.54     208.79    7151.72    5383.03   19945.33
00:09:15.040 ========================================================
00:09:15.040 Total                                  :  106899.24    1252.73    7167.30    5009.91   27152.63
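One quick way to read the table: every namespace moved the same load, and the columns are tied together by the command line above. IOPS times the 12288-byte I/O size gives the MiB/s column, and at queue depth 128 Little's law ties IOPS to the average latency. A small Python check of both identities (values copied from the 0000:00:06.0 row; nothing here is SPDK code):

```python
# Values copied from the PCIE (0000:00:06.0) NSID 1 row above.
IOPS, MIBPS, AVG_US = 17816.54, 208.79, 7181.19
IO_SIZE = 12288   # bytes, from "-o 12288"
QDEPTH = 128      # from "-q 128"

# The throughput column is just IOPS times the I/O size.
assert abs(IOPS * IO_SIZE / 2**20 - MIBPS) < 0.01   # 208.79 MiB/s

# Little's law at a fixed queue depth: IOPS ~= qdepth / mean latency.
print(QDEPTH / (AVG_US * 1e-6))   # ~17824, close to the measured 17816.54
```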
00:09:15.040 Summary latency data from core 0 (percentiles, us):
00:09:15.040 =================================================================================
00:09:15.040 Device                      :       1%      10%      25%      50%      75%      90%       95%       98%       99%     99.5%     99.9%    99.99%   99.999%
00:09:15.040 PCIE (0000:00:06.0) NSID 1  : 5646.178 5999.065 6251.126 6755.249 7309.785 7965.145 11544.418 13409.674 14317.095 24903.680 26214.400 27020.997 27222.646
00:09:15.040 PCIE (0000:00:07.0) NSID 1  : 5948.652 6225.920 6452.775 6755.249 7057.723 7662.671 10838.646 13812.972 14518.745 23895.434 24702.031 25206.154 25206.154
00:09:15.040 PCIE (0000:00:09.0) NSID 1  : 5898.240 6225.920 6452.775 6704.837 7108.135 7813.908 10838.646 13308.849 14922.043 21979.766 24097.083 24500.382 24500.382
00:09:15.040 PCIE (0000:00:08.0) NSID 1  : 5898.240 6200.714 6427.569 6704.837 7108.135 7662.671 10889.058 14115.446 15325.342 20669.046 22685.538 23088.837 23088.837
00:09:15.040 PCIE (0000:00:08.0) NSID 2  : 5898.240 6200.714 6427.569 6704.837 7108.135 7713.083 11443.594 13812.972 14821.218 19459.151 20971.520 21475.643 21475.643
00:09:15.041 PCIE (0000:00:08.0) NSID 3  : 5898.240 6225.920 6427.569 6704.837 7057.723 7662.671 12048.542 13308.849 14317.095 18450.905 19459.151 19963.274 19963.274
00:09:15.041 (the 99.9999% and 99.99999% figures equal the 99.999% value for every namespace)
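The per-bucket dumps and the percentile summaries carry the same information: each summary percentile is the upper edge of the first bucket whose Cumulative column reaches that percentage. In the 0000:00:06.0 data, for instance, the cumulative share first passes 95% at the bucket ending 11544.418 us (95.0781%), which is exactly the `95.00000% : 11544.418us` figure above. A sketch of that lookup, reusing `parse_buckets` from the earlier snippet (my helper, on the assumption that this is how the summary is derived):

```python
def percentile_us(buckets, pct):
    """Upper edge of the first bucket whose cumulative share reaches pct."""
    for lo_us, hi_us, cum_pct, count in buckets:
        if cum_pct >= pct:
            return hi_us
    raise ValueError(f"cumulative data never reaches {pct}%")

# e.g. percentile_us(parse_buckets(console_text), 95.0) on the
# 0000:00:06.0 dump returns 11544.418, matching the summary row above.
```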
00:09:15.041 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0:
00:09:15.041 ==============================================================================
00:09:15.041        Range in us    Cumulative   IO count
[… per-bucket rows: 4990.818 us through 27222.646 us, cumulative 100.0000% ( 1 )]
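A note on the bucket edges in all of these dumps: the bucket width steps from 25.206 us to 50.412 us near 6450 us, to 100.824 us near 12900 us, and to 201.649 us near 25810 us, doubling each time the latency doubles. That is the signature of a shift-based histogram keeping a fixed number of linear sub-buckets per power-of-two range (SPDK ships one in include/spdk/histogram_data.h). A toy sketch of that indexing; the granularity and names are illustrative, not SPDK's exact constants:

```python
def bucket_of(ticks: int, g: int = 8):
    """(range, sub_bucket) for a histogram with 2**g linear sub-buckets
    per power-of-two range; sub-bucket width doubles with each range."""
    b = ticks.bit_length()
    if b <= g + 1:
        return 0, ticks            # smallest values map one-to-one
    r = b - (g + 1)                # which power-of-two range
    return r, (ticks >> r) - 2**g  # linear index inside that range

# Bucket width in ticks for range r is 2**r, so once tick counts are
# scaled to microseconds by the TSC rate, the printed edges double in
# spacing (25.2 -> 50.4 -> 100.8 us) exactly as seen in the dumps.
```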
00:09:15.042 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0:
00:09:15.042 ==============================================================================
00:09:15.042        Range in us    Cumulative   IO count
[… per-bucket rows: 5469.735 us through 25206.154 us, cumulative 100.0000% ( 3 )]
00:09:15.044 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0:
00:09:15.044 ==============================================================================
00:09:15.044        Range in us    Cumulative   IO count
[… per-bucket rows from 5343.705 us; the capture cuts off at 94.4643% ( 10636.997 us )]
94.5982% ( 24) 00:09:15.044 10687.409 - 10737.822: 94.7377% ( 25) 00:09:15.045 10737.822 - 10788.234: 94.8772% ( 25) 00:09:15.045 10788.234 - 10838.646: 95.0167% ( 25) 00:09:15.045 10838.646 - 10889.058: 95.1283% ( 20) 00:09:15.045 10889.058 - 10939.471: 95.2232% ( 17) 00:09:15.045 10939.471 - 10989.883: 95.3460% ( 22) 00:09:15.045 10989.883 - 11040.295: 95.5134% ( 30) 00:09:15.045 11040.295 - 11090.708: 95.7422% ( 41) 00:09:15.045 11090.708 - 11141.120: 95.8482% ( 19) 00:09:15.045 11141.120 - 11191.532: 95.9208% ( 13) 00:09:15.045 11191.532 - 11241.945: 95.9989% ( 14) 00:09:15.045 11241.945 - 11292.357: 96.0826% ( 15) 00:09:15.045 11292.357 - 11342.769: 96.1607% ( 14) 00:09:15.045 11342.769 - 11393.182: 96.2556% ( 17) 00:09:15.045 11393.182 - 11443.594: 96.3504% ( 17) 00:09:15.045 11443.594 - 11494.006: 96.4565% ( 19) 00:09:15.045 11494.006 - 11544.418: 96.5681% ( 20) 00:09:15.045 11544.418 - 11594.831: 96.6964% ( 23) 00:09:15.045 11594.831 - 11645.243: 96.7913% ( 17) 00:09:15.045 11645.243 - 11695.655: 96.8862% ( 17) 00:09:15.045 11695.655 - 11746.068: 96.9643% ( 14) 00:09:15.045 11746.068 - 11796.480: 97.0201% ( 10) 00:09:15.045 11796.480 - 11846.892: 97.0926% ( 13) 00:09:15.045 11846.892 - 11897.305: 97.1540% ( 11) 00:09:15.045 11897.305 - 11947.717: 97.2266% ( 13) 00:09:15.045 11947.717 - 11998.129: 97.2935% ( 12) 00:09:15.045 11998.129 - 12048.542: 97.3326% ( 7) 00:09:15.045 12048.542 - 12098.954: 97.3717% ( 7) 00:09:15.045 12098.954 - 12149.366: 97.3996% ( 5) 00:09:15.045 12149.366 - 12199.778: 97.4330% ( 6) 00:09:15.045 12199.778 - 12250.191: 97.4609% ( 5) 00:09:15.045 12250.191 - 12300.603: 97.4944% ( 6) 00:09:15.045 12300.603 - 12351.015: 97.5223% ( 5) 00:09:15.045 12351.015 - 12401.428: 97.5558% ( 6) 00:09:15.045 12401.428 - 12451.840: 97.5837% ( 5) 00:09:15.045 12451.840 - 12502.252: 97.6060% ( 4) 00:09:15.045 12502.252 - 12552.665: 97.6172% ( 2) 00:09:15.045 12552.665 - 12603.077: 97.6339% ( 3) 00:09:15.045 12603.077 - 12653.489: 97.6451% ( 2) 00:09:15.045 12653.489 - 12703.902: 97.6618% ( 3) 00:09:15.045 12703.902 - 12754.314: 97.6730% ( 2) 00:09:15.045 12754.314 - 12804.726: 97.6897% ( 3) 00:09:15.045 12804.726 - 12855.138: 97.7065% ( 3) 00:09:15.045 12855.138 - 12905.551: 97.7232% ( 3) 00:09:15.045 12905.551 - 13006.375: 97.8013% ( 14) 00:09:15.045 13006.375 - 13107.200: 97.8627% ( 11) 00:09:15.045 13107.200 - 13208.025: 97.9297% ( 12) 00:09:15.045 13208.025 - 13308.849: 98.0078% ( 14) 00:09:15.045 13308.849 - 13409.674: 98.0915% ( 15) 00:09:15.045 13409.674 - 13510.498: 98.1529% ( 11) 00:09:15.045 13510.498 - 13611.323: 98.2199% ( 12) 00:09:15.045 13611.323 - 13712.148: 98.2757% ( 10) 00:09:15.045 13712.148 - 13812.972: 98.3371% ( 11) 00:09:15.045 13812.972 - 13913.797: 98.3873% ( 9) 00:09:15.045 13913.797 - 14014.622: 98.4487% ( 11) 00:09:15.045 14014.622 - 14115.446: 98.5212% ( 13) 00:09:15.045 14115.446 - 14216.271: 98.5714% ( 9) 00:09:15.045 14216.271 - 14317.095: 98.6328% ( 11) 00:09:15.045 14317.095 - 14417.920: 98.6830% ( 9) 00:09:15.045 14417.920 - 14518.745: 98.7556% ( 13) 00:09:15.045 14518.745 - 14619.569: 98.8114% ( 10) 00:09:15.045 14619.569 - 14720.394: 98.8728% ( 11) 00:09:15.045 14720.394 - 14821.218: 98.9342% ( 11) 00:09:15.045 14821.218 - 14922.043: 99.0011% ( 12) 00:09:15.045 14922.043 - 15022.868: 99.0458% ( 8) 00:09:15.045 15022.868 - 15123.692: 99.1127% ( 12) 00:09:15.045 15123.692 - 15224.517: 99.1741% ( 11) 00:09:15.045 15224.517 - 15325.342: 99.2299% ( 10) 00:09:15.045 15325.342 - 15426.166: 99.2690% ( 7) 00:09:15.045 15426.166 - 15526.991: 
99.2801% ( 2) 00:09:15.045 15526.991 - 15627.815: 99.2857% ( 1) 00:09:15.045 20769.871 - 20870.695: 99.2913% ( 1) 00:09:15.045 20870.695 - 20971.520: 99.3080% ( 3) 00:09:15.045 20971.520 - 21072.345: 99.3304% ( 4) 00:09:15.045 21072.345 - 21173.169: 99.3471% ( 3) 00:09:15.045 21173.169 - 21273.994: 99.3694% ( 4) 00:09:15.045 21273.994 - 21374.818: 99.3862% ( 3) 00:09:15.045 21374.818 - 21475.643: 99.4085% ( 4) 00:09:15.045 21475.643 - 21576.468: 99.4308% ( 4) 00:09:15.045 21576.468 - 21677.292: 99.4475% ( 3) 00:09:15.045 21677.292 - 21778.117: 99.4699% ( 4) 00:09:15.045 21778.117 - 21878.942: 99.4866% ( 3) 00:09:15.045 21878.942 - 21979.766: 99.5033% ( 3) 00:09:15.045 21979.766 - 22080.591: 99.5257% ( 4) 00:09:15.045 22080.591 - 22181.415: 99.5424% ( 3) 00:09:15.045 22181.415 - 22282.240: 99.5647% ( 4) 00:09:15.045 22282.240 - 22383.065: 99.5815% ( 3) 00:09:15.045 22383.065 - 22483.889: 99.6038% ( 4) 00:09:15.045 22483.889 - 22584.714: 99.6261% ( 4) 00:09:15.045 22584.714 - 22685.538: 99.6429% ( 3) 00:09:15.045 22685.538 - 22786.363: 99.6652% ( 4) 00:09:15.045 22786.363 - 22887.188: 99.6819% ( 3) 00:09:15.045 22887.188 - 22988.012: 99.6987% ( 3) 00:09:15.045 22988.012 - 23088.837: 99.7210% ( 4) 00:09:15.045 23088.837 - 23189.662: 99.7377% ( 3) 00:09:15.045 23189.662 - 23290.486: 99.7600% ( 4) 00:09:15.045 23290.486 - 23391.311: 99.7768% ( 3) 00:09:15.045 23391.311 - 23492.135: 99.7991% ( 4) 00:09:15.045 23492.135 - 23592.960: 99.8158% ( 3) 00:09:15.045 23592.960 - 23693.785: 99.8382% ( 4) 00:09:15.045 23693.785 - 23794.609: 99.8605% ( 4) 00:09:15.045 23794.609 - 23895.434: 99.8772% ( 3) 00:09:15.045 23895.434 - 23996.258: 99.8996% ( 4) 00:09:15.045 23996.258 - 24097.083: 99.9163% ( 3) 00:09:15.045 24097.083 - 24197.908: 99.9386% ( 4) 00:09:15.045 24197.908 - 24298.732: 99.9554% ( 3) 00:09:15.045 24298.732 - 24399.557: 99.9777% ( 4) 00:09:15.045 24399.557 - 24500.382: 100.0000% ( 4) 00:09:15.045 00:09:15.045 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:15.045 ============================================================================== 00:09:15.045 Range in us Cumulative IO count 00:09:15.045 5293.292 - 5318.498: 0.0056% ( 1) 00:09:15.045 5494.942 - 5520.148: 0.0112% ( 1) 00:09:15.045 5570.560 - 5595.766: 0.0223% ( 2) 00:09:15.045 5595.766 - 5620.972: 0.0446% ( 4) 00:09:15.045 5620.972 - 5646.178: 0.0781% ( 6) 00:09:15.045 5646.178 - 5671.385: 0.1283% ( 9) 00:09:15.045 5671.385 - 5696.591: 0.1897% ( 11) 00:09:15.045 5696.591 - 5721.797: 0.2455% ( 10) 00:09:15.045 5721.797 - 5747.003: 0.3125% ( 12) 00:09:15.045 5747.003 - 5772.209: 0.3962% ( 15) 00:09:15.045 5772.209 - 5797.415: 0.4799% ( 15) 00:09:15.045 5797.415 - 5822.622: 0.5859% ( 19) 00:09:15.045 5822.622 - 5847.828: 0.6975% ( 20) 00:09:15.045 5847.828 - 5873.034: 0.8315% ( 24) 00:09:15.045 5873.034 - 5898.240: 1.0491% ( 39) 00:09:15.045 5898.240 - 5923.446: 1.2891% ( 43) 00:09:15.045 5923.446 - 5948.652: 1.5737% ( 51) 00:09:15.045 5948.652 - 5973.858: 1.8973% ( 58) 00:09:15.045 5973.858 - 5999.065: 2.3940% ( 89) 00:09:15.045 5999.065 - 6024.271: 2.9353% ( 97) 00:09:15.045 6024.271 - 6049.477: 3.6440% ( 127) 00:09:15.045 6049.477 - 6074.683: 4.6484% ( 180) 00:09:15.045 6074.683 - 6099.889: 5.6306% ( 176) 00:09:15.045 6099.889 - 6125.095: 6.6295% ( 179) 00:09:15.045 6125.095 - 6150.302: 7.8125% ( 212) 00:09:15.045 6150.302 - 6175.508: 9.1239% ( 235) 00:09:15.045 6175.508 - 6200.714: 10.6083% ( 266) 00:09:15.045 6200.714 - 6225.920: 11.8304% ( 219) 00:09:15.045 6225.920 - 6251.126: 13.1083% ( 229) 00:09:15.045 
6251.126 - 6276.332: 14.4643% ( 243) 00:09:15.045 6276.332 - 6301.538: 15.8371% ( 246) 00:09:15.045 6301.538 - 6326.745: 17.1652% ( 238) 00:09:15.045 6326.745 - 6351.951: 18.6328% ( 263) 00:09:15.045 6351.951 - 6377.157: 20.3795% ( 313) 00:09:15.045 6377.157 - 6402.363: 22.5446% ( 388) 00:09:15.045 6402.363 - 6427.569: 25.0056% ( 441) 00:09:15.045 6427.569 - 6452.775: 27.3661% ( 423) 00:09:15.045 6452.775 - 6503.188: 32.6507% ( 947) 00:09:15.045 6503.188 - 6553.600: 37.7232% ( 909) 00:09:15.046 6553.600 - 6604.012: 42.3047% ( 821) 00:09:15.046 6604.012 - 6654.425: 46.7411% ( 795) 00:09:15.046 6654.425 - 6704.837: 50.7533% ( 719) 00:09:15.046 6704.837 - 6755.249: 54.0904% ( 598) 00:09:15.046 6755.249 - 6805.662: 57.6618% ( 640) 00:09:15.046 6805.662 - 6856.074: 60.4353% ( 497) 00:09:15.046 6856.074 - 6906.486: 64.2355% ( 681) 00:09:15.046 6906.486 - 6956.898: 67.8013% ( 639) 00:09:15.046 6956.898 - 7007.311: 71.3783% ( 641) 00:09:15.046 7007.311 - 7057.723: 74.2299% ( 511) 00:09:15.046 7057.723 - 7108.135: 76.7913% ( 459) 00:09:15.046 7108.135 - 7158.548: 79.0067% ( 397) 00:09:15.046 7158.548 - 7208.960: 80.6641% ( 297) 00:09:15.046 7208.960 - 7259.372: 82.3772% ( 307) 00:09:15.046 7259.372 - 7309.785: 84.2076% ( 328) 00:09:15.046 7309.785 - 7360.197: 85.3571% ( 206) 00:09:15.046 7360.197 - 7410.609: 86.3114% ( 171) 00:09:15.046 7410.609 - 7461.022: 87.1540% ( 151) 00:09:15.046 7461.022 - 7511.434: 88.0022% ( 152) 00:09:15.046 7511.434 - 7561.846: 88.8002% ( 143) 00:09:15.046 7561.846 - 7612.258: 89.4308% ( 113) 00:09:15.046 7612.258 - 7662.671: 90.0167% ( 105) 00:09:15.046 7662.671 - 7713.083: 90.4632% ( 80) 00:09:15.046 7713.083 - 7763.495: 90.8036% ( 61) 00:09:15.046 7763.495 - 7813.908: 91.4342% ( 113) 00:09:15.046 7813.908 - 7864.320: 91.6295% ( 35) 00:09:15.046 7864.320 - 7914.732: 91.7578% ( 23) 00:09:15.046 7914.732 - 7965.145: 91.8527% ( 17) 00:09:15.046 7965.145 - 8015.557: 92.0089% ( 28) 00:09:15.046 8015.557 - 8065.969: 92.1429% ( 24) 00:09:15.046 8065.969 - 8116.382: 92.2712% ( 23) 00:09:15.046 8116.382 - 8166.794: 92.3828% ( 20) 00:09:15.046 8166.794 - 8217.206: 92.4609% ( 14) 00:09:15.046 8217.206 - 8267.618: 92.5335% ( 13) 00:09:15.046 8267.618 - 8318.031: 92.6116% ( 14) 00:09:15.046 8318.031 - 8368.443: 92.7009% ( 16) 00:09:15.046 8368.443 - 8418.855: 92.7344% ( 6) 00:09:15.046 8418.855 - 8469.268: 92.7623% ( 5) 00:09:15.046 8469.268 - 8519.680: 92.7958% ( 6) 00:09:15.046 8519.680 - 8570.092: 92.8292% ( 6) 00:09:15.046 8570.092 - 8620.505: 92.8571% ( 5) 00:09:15.046 8620.505 - 8670.917: 92.8906% ( 6) 00:09:15.046 8670.917 - 8721.329: 92.9185% ( 5) 00:09:15.046 8721.329 - 8771.742: 92.9464% ( 5) 00:09:15.046 8771.742 - 8822.154: 92.9799% ( 6) 00:09:15.046 8822.154 - 8872.566: 93.0078% ( 5) 00:09:15.046 8872.566 - 8922.978: 93.0357% ( 5) 00:09:15.046 8922.978 - 8973.391: 93.0636% ( 5) 00:09:15.046 8973.391 - 9023.803: 93.0915% ( 5) 00:09:15.046 9023.803 - 9074.215: 93.1250% ( 6) 00:09:15.046 9074.215 - 9124.628: 93.1529% ( 5) 00:09:15.046 9124.628 - 9175.040: 93.1920% ( 7) 00:09:15.046 9175.040 - 9225.452: 93.2310% ( 7) 00:09:15.046 9225.452 - 9275.865: 93.2645% ( 6) 00:09:15.046 9275.865 - 9326.277: 93.3036% ( 7) 00:09:15.046 9326.277 - 9376.689: 93.3371% ( 6) 00:09:15.046 9376.689 - 9427.102: 93.3761% ( 7) 00:09:15.046 9427.102 - 9477.514: 93.4096% ( 6) 00:09:15.046 9477.514 - 9527.926: 93.4431% ( 6) 00:09:15.046 9527.926 - 9578.338: 93.4654% ( 4) 00:09:15.046 9578.338 - 9628.751: 93.4821% ( 3) 00:09:15.046 9628.751 - 9679.163: 93.4989% ( 3) 00:09:15.046 9679.163 - 
9729.575: 93.5156% ( 3) 00:09:15.046 9729.575 - 9779.988: 93.5547% ( 7) 00:09:15.046 9779.988 - 9830.400: 93.6049% ( 9) 00:09:15.046 9830.400 - 9880.812: 93.6496% ( 8) 00:09:15.046 9880.812 - 9931.225: 93.7388% ( 16) 00:09:15.046 9931.225 - 9981.637: 93.8002% ( 11) 00:09:15.046 9981.637 - 10032.049: 93.8616% ( 11) 00:09:15.046 10032.049 - 10082.462: 93.9286% ( 12) 00:09:15.046 10082.462 - 10132.874: 93.9900% ( 11) 00:09:15.046 10132.874 - 10183.286: 94.0402% ( 9) 00:09:15.046 10183.286 - 10233.698: 94.1016% ( 11) 00:09:15.046 10233.698 - 10284.111: 94.2746% ( 31) 00:09:15.046 10284.111 - 10334.523: 94.3304% ( 10) 00:09:15.046 10334.523 - 10384.935: 94.3806% ( 9) 00:09:15.046 10384.935 - 10435.348: 94.5312% ( 27) 00:09:15.046 10435.348 - 10485.760: 94.5871% ( 10) 00:09:15.046 10485.760 - 10536.172: 94.6484% ( 11) 00:09:15.046 10536.172 - 10586.585: 94.7042% ( 10) 00:09:15.046 10586.585 - 10636.997: 94.7545% ( 9) 00:09:15.046 10636.997 - 10687.409: 94.8103% ( 10) 00:09:15.046 10687.409 - 10737.822: 94.8661% ( 10) 00:09:15.046 10737.822 - 10788.234: 94.9330% ( 12) 00:09:15.046 10788.234 - 10838.646: 94.9665% ( 6) 00:09:15.046 10838.646 - 10889.058: 95.0056% ( 7) 00:09:15.046 10889.058 - 10939.471: 95.0502% ( 8) 00:09:15.046 10939.471 - 10989.883: 95.0837% ( 6) 00:09:15.046 10989.883 - 11040.295: 95.1228% ( 7) 00:09:15.046 11040.295 - 11090.708: 95.1674% ( 8) 00:09:15.046 11090.708 - 11141.120: 95.2009% ( 6) 00:09:15.046 11141.120 - 11191.532: 95.2679% ( 12) 00:09:15.046 11191.532 - 11241.945: 95.3404% ( 13) 00:09:15.046 11241.945 - 11292.357: 95.3962% ( 10) 00:09:15.046 11292.357 - 11342.769: 95.4297% ( 6) 00:09:15.046 11342.769 - 11393.182: 95.4520% ( 4) 00:09:15.046 11393.182 - 11443.594: 95.4743% ( 4) 00:09:15.046 11443.594 - 11494.006: 95.5022% ( 5) 00:09:15.046 11494.006 - 11544.418: 95.5301% ( 5) 00:09:15.046 11544.418 - 11594.831: 95.5580% ( 5) 00:09:15.046 11594.831 - 11645.243: 95.5915% ( 6) 00:09:15.046 11645.243 - 11695.655: 95.6306% ( 7) 00:09:15.046 11695.655 - 11746.068: 95.6808% ( 9) 00:09:15.046 11746.068 - 11796.480: 95.7143% ( 6) 00:09:15.046 11796.480 - 11846.892: 95.7701% ( 10) 00:09:15.046 11846.892 - 11897.305: 95.8092% ( 7) 00:09:15.046 11897.305 - 11947.717: 95.8482% ( 7) 00:09:15.046 11947.717 - 11998.129: 95.8984% ( 9) 00:09:15.046 11998.129 - 12048.542: 95.9375% ( 7) 00:09:15.046 12048.542 - 12098.954: 95.9877% ( 9) 00:09:15.046 12098.954 - 12149.366: 96.0268% ( 7) 00:09:15.046 12149.366 - 12199.778: 96.0714% ( 8) 00:09:15.046 12199.778 - 12250.191: 96.1161% ( 8) 00:09:15.046 12250.191 - 12300.603: 96.1496% ( 6) 00:09:15.046 12300.603 - 12351.015: 96.2054% ( 10) 00:09:15.046 12351.015 - 12401.428: 96.2444% ( 7) 00:09:15.046 12401.428 - 12451.840: 96.2891% ( 8) 00:09:15.046 12451.840 - 12502.252: 96.3337% ( 8) 00:09:15.047 12502.252 - 12552.665: 96.3895% ( 10) 00:09:15.047 12552.665 - 12603.077: 96.4509% ( 11) 00:09:15.047 12603.077 - 12653.489: 96.5179% ( 12) 00:09:15.047 12653.489 - 12703.902: 96.5960% ( 14) 00:09:15.047 12703.902 - 12754.314: 96.6741% ( 14) 00:09:15.047 12754.314 - 12804.726: 96.7299% ( 10) 00:09:15.047 12804.726 - 12855.138: 96.8136% ( 15) 00:09:15.047 12855.138 - 12905.551: 96.9196% ( 19) 00:09:15.047 12905.551 - 13006.375: 97.1038% ( 33) 00:09:15.047 13006.375 - 13107.200: 97.2377% ( 24) 00:09:15.047 13107.200 - 13208.025: 97.3605% ( 22) 00:09:15.047 13208.025 - 13308.849: 97.4721% ( 20) 00:09:15.047 13308.849 - 13409.674: 97.5391% ( 12) 00:09:15.047 13409.674 - 13510.498: 97.6116% ( 13) 00:09:15.047 13510.498 - 13611.323: 97.6786% ( 12) 
00:09:15.047 13611.323 - 13712.148: 97.7455% ( 12) 00:09:15.047 13712.148 - 13812.972: 97.8181% ( 13) 00:09:15.047 13812.972 - 13913.797: 97.8850% ( 12) 00:09:15.047 13913.797 - 14014.622: 97.9576% ( 13) 00:09:15.047 14014.622 - 14115.446: 98.0301% ( 13) 00:09:15.047 14115.446 - 14216.271: 98.1027% ( 13) 00:09:15.047 14216.271 - 14317.095: 98.1752% ( 13) 00:09:15.047 14317.095 - 14417.920: 98.2478% ( 13) 00:09:15.047 14417.920 - 14518.745: 98.3147% ( 12) 00:09:15.047 14518.745 - 14619.569: 98.3817% ( 12) 00:09:15.047 14619.569 - 14720.394: 98.4598% ( 14) 00:09:15.047 14720.394 - 14821.218: 98.5658% ( 19) 00:09:15.047 14821.218 - 14922.043: 98.6719% ( 19) 00:09:15.047 14922.043 - 15022.868: 98.7444% ( 13) 00:09:15.047 15022.868 - 15123.692: 98.8002% ( 10) 00:09:15.047 15123.692 - 15224.517: 98.8895% ( 16) 00:09:15.047 15224.517 - 15325.342: 99.1016% ( 38) 00:09:15.047 15325.342 - 15426.166: 99.1350% ( 6) 00:09:15.047 15426.166 - 15526.991: 99.1629% ( 5) 00:09:15.047 15526.991 - 15627.815: 99.1964% ( 6) 00:09:15.047 15627.815 - 15728.640: 99.2299% ( 6) 00:09:15.047 15728.640 - 15829.465: 99.2634% ( 6) 00:09:15.047 15829.465 - 15930.289: 99.2857% ( 4) 00:09:15.047 19459.151 - 19559.975: 99.3025% ( 3) 00:09:15.047 19559.975 - 19660.800: 99.3248% ( 4) 00:09:15.047 19660.800 - 19761.625: 99.3415% ( 3) 00:09:15.047 19761.625 - 19862.449: 99.3638% ( 4) 00:09:15.047 19862.449 - 19963.274: 99.3806% ( 3) 00:09:15.047 19963.274 - 20064.098: 99.4029% ( 4) 00:09:15.047 20064.098 - 20164.923: 99.4196% ( 3) 00:09:15.047 20164.923 - 20265.748: 99.4420% ( 4) 00:09:15.047 20265.748 - 20366.572: 99.4587% ( 3) 00:09:15.047 20366.572 - 20467.397: 99.4754% ( 3) 00:09:15.047 20467.397 - 20568.222: 99.4978% ( 4) 00:09:15.047 20568.222 - 20669.046: 99.5145% ( 3) 00:09:15.047 20669.046 - 20769.871: 99.5368% ( 4) 00:09:15.047 20769.871 - 20870.695: 99.5592% ( 4) 00:09:15.047 20870.695 - 20971.520: 99.5759% ( 3) 00:09:15.047 20971.520 - 21072.345: 99.5982% ( 4) 00:09:15.047 21072.345 - 21173.169: 99.6150% ( 3) 00:09:15.047 21173.169 - 21273.994: 99.6373% ( 4) 00:09:15.047 21273.994 - 21374.818: 99.6540% ( 3) 00:09:15.047 21374.818 - 21475.643: 99.6763% ( 4) 00:09:15.047 21475.643 - 21576.468: 99.6931% ( 3) 00:09:15.047 21576.468 - 21677.292: 99.7154% ( 4) 00:09:15.047 21677.292 - 21778.117: 99.7321% ( 3) 00:09:15.047 21778.117 - 21878.942: 99.7545% ( 4) 00:09:15.047 21878.942 - 21979.766: 99.7712% ( 3) 00:09:15.047 21979.766 - 22080.591: 99.7935% ( 4) 00:09:15.047 22080.591 - 22181.415: 99.8158% ( 4) 00:09:15.047 22181.415 - 22282.240: 99.8382% ( 4) 00:09:15.047 22282.240 - 22383.065: 99.8549% ( 3) 00:09:15.047 22383.065 - 22483.889: 99.8772% ( 4) 00:09:15.047 22483.889 - 22584.714: 99.8940% ( 3) 00:09:15.047 22584.714 - 22685.538: 99.9163% ( 4) 00:09:15.047 22685.538 - 22786.363: 99.9330% ( 3) 00:09:15.047 22786.363 - 22887.188: 99.9554% ( 4) 00:09:15.047 22887.188 - 22988.012: 99.9777% ( 4) 00:09:15.047 22988.012 - 23088.837: 100.0000% ( 4) 00:09:15.047 00:09:15.047 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:15.047 ============================================================================== 00:09:15.047 Range in us Cumulative IO count 00:09:15.047 5494.942 - 5520.148: 0.0167% ( 3) 00:09:15.047 5570.560 - 5595.766: 0.0335% ( 3) 00:09:15.047 5595.766 - 5620.972: 0.0446% ( 2) 00:09:15.047 5620.972 - 5646.178: 0.0670% ( 4) 00:09:15.047 5646.178 - 5671.385: 0.0893% ( 4) 00:09:15.047 5671.385 - 5696.591: 0.1228% ( 6) 00:09:15.047 5696.591 - 5721.797: 0.1897% ( 12) 00:09:15.047 5721.797 - 
5747.003: 0.2902% ( 18) 00:09:15.047 5747.003 - 5772.209: 0.3795% ( 16) 00:09:15.047 5772.209 - 5797.415: 0.5134% ( 24) 00:09:15.047 5797.415 - 5822.622: 0.6306% ( 21) 00:09:15.047 5822.622 - 5847.828: 0.7589% ( 23) 00:09:15.047 5847.828 - 5873.034: 0.9319% ( 31) 00:09:15.047 5873.034 - 5898.240: 1.1328% ( 36) 00:09:15.047 5898.240 - 5923.446: 1.3560% ( 40) 00:09:15.047 5923.446 - 5948.652: 1.6797% ( 58) 00:09:15.047 5948.652 - 5973.858: 2.0647% ( 69) 00:09:15.047 5973.858 - 5999.065: 2.8292% ( 137) 00:09:15.047 5999.065 - 6024.271: 3.4431% ( 110) 00:09:15.047 6024.271 - 6049.477: 4.1239% ( 122) 00:09:15.047 6049.477 - 6074.683: 4.8884% ( 137) 00:09:15.047 6074.683 - 6099.889: 5.8092% ( 165) 00:09:15.047 6099.889 - 6125.095: 7.0815% ( 228) 00:09:15.047 6125.095 - 6150.302: 8.5100% ( 256) 00:09:15.047 6150.302 - 6175.508: 9.7210% ( 217) 00:09:15.047 6175.508 - 6200.714: 11.0324% ( 235) 00:09:15.047 6200.714 - 6225.920: 12.5167% ( 266) 00:09:15.047 6225.920 - 6251.126: 13.8616% ( 241) 00:09:15.047 6251.126 - 6276.332: 15.3125% ( 260) 00:09:15.047 6276.332 - 6301.538: 16.6797% ( 245) 00:09:15.047 6301.538 - 6326.745: 18.2143% ( 275) 00:09:15.047 6326.745 - 6351.951: 19.8493% ( 293) 00:09:15.047 6351.951 - 6377.157: 21.5346% ( 302) 00:09:15.047 6377.157 - 6402.363: 23.5603% ( 363) 00:09:15.047 6402.363 - 6427.569: 26.0212% ( 441) 00:09:15.047 6427.569 - 6452.775: 28.2478% ( 399) 00:09:15.047 6452.775 - 6503.188: 33.4654% ( 935) 00:09:15.047 6503.188 - 6553.600: 38.7444% ( 946) 00:09:15.047 6553.600 - 6604.012: 42.8571% ( 737) 00:09:15.047 6604.012 - 6654.425: 46.8694% ( 719) 00:09:15.047 6654.425 - 6704.837: 50.6194% ( 672) 00:09:15.047 6704.837 - 6755.249: 54.5926% ( 712) 00:09:15.047 6755.249 - 6805.662: 57.9018% ( 593) 00:09:15.047 6805.662 - 6856.074: 61.1049% ( 574) 00:09:15.047 6856.074 - 6906.486: 64.8270% ( 667) 00:09:15.047 6906.486 - 6956.898: 68.0301% ( 574) 00:09:15.047 6956.898 - 7007.311: 71.5792% ( 636) 00:09:15.047 7007.311 - 7057.723: 74.4531% ( 515) 00:09:15.047 7057.723 - 7108.135: 77.0926% ( 473) 00:09:15.047 7108.135 - 7158.548: 79.0011% ( 342) 00:09:15.047 7158.548 - 7208.960: 80.8594% ( 333) 00:09:15.047 7208.960 - 7259.372: 82.8348% ( 354) 00:09:15.047 7259.372 - 7309.785: 84.2913% ( 261) 00:09:15.047 7309.785 - 7360.197: 85.5580% ( 227) 00:09:15.047 7360.197 - 7410.609: 86.4900% ( 167) 00:09:15.047 7410.609 - 7461.022: 87.2768% ( 141) 00:09:15.048 7461.022 - 7511.434: 87.9967% ( 129) 00:09:15.048 7511.434 - 7561.846: 88.6384% ( 115) 00:09:15.048 7561.846 - 7612.258: 89.2299% ( 106) 00:09:15.048 7612.258 - 7662.671: 89.7656% ( 96) 00:09:15.048 7662.671 - 7713.083: 90.2846% ( 93) 00:09:15.048 7713.083 - 7763.495: 90.6920% ( 73) 00:09:15.048 7763.495 - 7813.908: 91.1998% ( 91) 00:09:15.048 7813.908 - 7864.320: 91.6239% ( 76) 00:09:15.048 7864.320 - 7914.732: 91.7969% ( 31) 00:09:15.048 7914.732 - 7965.145: 92.0480% ( 45) 00:09:15.048 7965.145 - 8015.557: 92.2042% ( 28) 00:09:15.048 8015.557 - 8065.969: 92.2879% ( 15) 00:09:15.048 8065.969 - 8116.382: 92.4163% ( 23) 00:09:15.048 8116.382 - 8166.794: 92.5391% ( 22) 00:09:15.048 8166.794 - 8217.206: 92.6116% ( 13) 00:09:15.048 8217.206 - 8267.618: 92.6730% ( 11) 00:09:15.048 8267.618 - 8318.031: 92.7288% ( 10) 00:09:15.048 8318.031 - 8368.443: 92.7958% ( 12) 00:09:15.048 8368.443 - 8418.855: 92.8516% ( 10) 00:09:15.048 8418.855 - 8469.268: 92.9185% ( 12) 00:09:15.048 8469.268 - 8519.680: 92.9743% ( 10) 00:09:15.048 8519.680 - 8570.092: 93.0469% ( 13) 00:09:15.048 8570.092 - 8620.505: 93.0915% ( 8) 00:09:15.048 8620.505 - 
8670.917: 93.1306% ( 7) 00:09:15.048 8670.917 - 8721.329: 93.1529% ( 4) 00:09:15.048 8721.329 - 8771.742: 93.1864% ( 6) 00:09:15.048 8771.742 - 8822.154: 93.2087% ( 4) 00:09:15.048 8822.154 - 8872.566: 93.2310% ( 4) 00:09:15.048 8872.566 - 8922.978: 93.2478% ( 3) 00:09:15.048 8922.978 - 8973.391: 93.2701% ( 4) 00:09:15.048 8973.391 - 9023.803: 93.2868% ( 3) 00:09:15.048 9023.803 - 9074.215: 93.3092% ( 4) 00:09:15.048 9074.215 - 9124.628: 93.3259% ( 3) 00:09:15.048 9124.628 - 9175.040: 93.3426% ( 3) 00:09:15.048 9175.040 - 9225.452: 93.3650% ( 4) 00:09:15.048 9225.452 - 9275.865: 93.3817% ( 3) 00:09:15.048 9275.865 - 9326.277: 93.4040% ( 4) 00:09:15.048 9326.277 - 9376.689: 93.4208% ( 3) 00:09:15.048 9376.689 - 9427.102: 93.4431% ( 4) 00:09:15.048 9427.102 - 9477.514: 93.4598% ( 3) 00:09:15.048 9477.514 - 9527.926: 93.4821% ( 4) 00:09:15.048 9527.926 - 9578.338: 93.4989% ( 3) 00:09:15.048 9578.338 - 9628.751: 93.5156% ( 3) 00:09:15.048 9628.751 - 9679.163: 93.5379% ( 4) 00:09:15.048 9679.163 - 9729.575: 93.5547% ( 3) 00:09:15.048 9729.575 - 9779.988: 93.5714% ( 3) 00:09:15.048 10132.874 - 10183.286: 93.5770% ( 1) 00:09:15.048 10233.698 - 10284.111: 93.5938% ( 3) 00:09:15.048 10284.111 - 10334.523: 93.6161% ( 4) 00:09:15.048 10334.523 - 10384.935: 93.6384% ( 4) 00:09:15.048 10384.935 - 10435.348: 93.6663% ( 5) 00:09:15.048 10435.348 - 10485.760: 93.6942% ( 5) 00:09:15.048 10485.760 - 10536.172: 93.7054% ( 2) 00:09:15.048 10536.172 - 10586.585: 93.7109% ( 1) 00:09:15.048 10586.585 - 10636.997: 93.7277% ( 3) 00:09:15.048 10636.997 - 10687.409: 93.7333% ( 1) 00:09:15.048 10687.409 - 10737.822: 93.7500% ( 3) 00:09:15.048 10737.822 - 10788.234: 93.8002% ( 9) 00:09:15.048 10788.234 - 10838.646: 93.8504% ( 9) 00:09:15.048 10838.646 - 10889.058: 93.9062% ( 10) 00:09:15.048 10889.058 - 10939.471: 93.9676% ( 11) 00:09:15.048 10939.471 - 10989.883: 94.0569% ( 16) 00:09:15.048 10989.883 - 11040.295: 94.1462% ( 16) 00:09:15.048 11040.295 - 11090.708: 94.2132% ( 12) 00:09:15.048 11090.708 - 11141.120: 94.2857% ( 13) 00:09:15.048 11141.120 - 11191.532: 94.3527% ( 12) 00:09:15.048 11191.532 - 11241.945: 94.5033% ( 27) 00:09:15.048 11241.945 - 11292.357: 94.8661% ( 65) 00:09:15.048 11292.357 - 11342.769: 94.9498% ( 15) 00:09:15.048 11342.769 - 11393.182: 94.9944% ( 8) 00:09:15.048 11393.182 - 11443.594: 95.0391% ( 8) 00:09:15.048 11443.594 - 11494.006: 95.0781% ( 7) 00:09:15.048 11494.006 - 11544.418: 95.1228% ( 8) 00:09:15.048 11544.418 - 11594.831: 95.1730% ( 9) 00:09:15.048 11594.831 - 11645.243: 95.2121% ( 7) 00:09:15.048 11645.243 - 11695.655: 95.2623% ( 9) 00:09:15.048 11695.655 - 11746.068: 95.3069% ( 8) 00:09:15.048 11746.068 - 11796.480: 95.3516% ( 8) 00:09:15.048 11796.480 - 11846.892: 95.3850% ( 6) 00:09:15.048 11846.892 - 11897.305: 95.4297% ( 8) 00:09:15.048 11897.305 - 11947.717: 95.4911% ( 11) 00:09:15.048 11947.717 - 11998.129: 95.5413% ( 9) 00:09:15.048 11998.129 - 12048.542: 95.5859% ( 8) 00:09:15.048 12048.542 - 12098.954: 95.6138% ( 5) 00:09:15.048 12098.954 - 12149.366: 95.6641% ( 9) 00:09:15.048 12149.366 - 12199.778: 95.7366% ( 13) 00:09:15.048 12199.778 - 12250.191: 95.8092% ( 13) 00:09:15.048 12250.191 - 12300.603: 95.8426% ( 6) 00:09:15.048 12300.603 - 12351.015: 95.8817% ( 7) 00:09:15.048 12351.015 - 12401.428: 95.9319% ( 9) 00:09:15.048 12401.428 - 12451.840: 95.9821% ( 9) 00:09:15.048 12451.840 - 12502.252: 96.0379% ( 10) 00:09:15.048 12502.252 - 12552.665: 96.0882% ( 9) 00:09:15.048 12552.665 - 12603.077: 96.1384% ( 9) 00:09:15.048 12603.077 - 12653.489: 96.1942% ( 10) 
00:09:15.048 12653.489 - 12703.902: 96.2444% ( 9) 00:09:15.048 12703.902 - 12754.314: 96.2891% ( 8) 00:09:15.048 12754.314 - 12804.726: 96.3281% ( 7) 00:09:15.048 12804.726 - 12855.138: 96.3895% ( 11) 00:09:15.048 12855.138 - 12905.551: 96.4397% ( 9) 00:09:15.048 12905.551 - 13006.375: 96.5681% ( 23) 00:09:15.048 13006.375 - 13107.200: 96.6853% ( 21) 00:09:15.048 13107.200 - 13208.025: 96.8192% ( 24) 00:09:15.048 13208.025 - 13308.849: 96.9475% ( 23) 00:09:15.048 13308.849 - 13409.674: 97.3549% ( 73) 00:09:15.048 13409.674 - 13510.498: 97.5056% ( 27) 00:09:15.048 13510.498 - 13611.323: 97.6618% ( 28) 00:09:15.048 13611.323 - 13712.148: 97.8795% ( 39) 00:09:15.048 13712.148 - 13812.972: 98.0469% ( 30) 00:09:15.048 13812.972 - 13913.797: 98.3817% ( 60) 00:09:15.048 13913.797 - 14014.622: 98.4989% ( 21) 00:09:15.048 14014.622 - 14115.446: 98.5993% ( 18) 00:09:15.048 14115.446 - 14216.271: 98.7165% ( 21) 00:09:15.048 14216.271 - 14317.095: 98.7667% ( 9) 00:09:15.048 14317.095 - 14417.920: 98.8170% ( 9) 00:09:15.048 14417.920 - 14518.745: 98.8728% ( 10) 00:09:15.048 14518.745 - 14619.569: 98.9230% ( 9) 00:09:15.048 14619.569 - 14720.394: 98.9732% ( 9) 00:09:15.048 14720.394 - 14821.218: 99.0290% ( 10) 00:09:15.048 14821.218 - 14922.043: 99.0792% ( 9) 00:09:15.048 14922.043 - 15022.868: 99.1127% ( 6) 00:09:15.048 15022.868 - 15123.692: 99.1295% ( 3) 00:09:15.048 15123.692 - 15224.517: 99.1462% ( 3) 00:09:15.048 15224.517 - 15325.342: 99.1685% ( 4) 00:09:15.048 15325.342 - 15426.166: 99.1853% ( 3) 00:09:15.048 15426.166 - 15526.991: 99.2020% ( 3) 00:09:15.048 15526.991 - 15627.815: 99.2188% ( 3) 00:09:15.048 15627.815 - 15728.640: 99.2411% ( 4) 00:09:15.048 15728.640 - 15829.465: 99.2522% ( 2) 00:09:15.048 15829.465 - 15930.289: 99.2690% ( 3) 00:09:15.048 15930.289 - 16031.114: 99.2801% ( 2) 00:09:15.048 16031.114 - 16131.938: 99.2857% ( 1) 00:09:15.048 18955.028 - 19055.852: 99.2913% ( 1) 00:09:15.048 19156.677 - 19257.502: 99.3192% ( 5) 00:09:15.048 19257.502 - 19358.326: 99.4252% ( 19) 00:09:15.048 19358.326 - 19459.151: 99.5647% ( 25) 00:09:15.048 19459.151 - 19559.975: 99.6429% ( 14) 00:09:15.048 19559.975 - 19660.800: 99.6875% ( 8) 00:09:15.048 19660.800 - 19761.625: 99.6987% ( 2) 00:09:15.048 19761.625 - 19862.449: 99.7210% ( 4) 00:09:15.048 19862.449 - 19963.274: 99.7377% ( 3) 00:09:15.048 19963.274 - 20064.098: 99.7545% ( 3) 00:09:15.048 20064.098 - 20164.923: 99.7768% ( 4) 00:09:15.048 20164.923 - 20265.748: 99.7879% ( 2) 00:09:15.048 20265.748 - 20366.572: 99.8103% ( 4) 00:09:15.048 20366.572 - 20467.397: 99.8270% ( 3) 00:09:15.048 20467.397 - 20568.222: 99.8438% ( 3) 00:09:15.048 20568.222 - 20669.046: 99.8661% ( 4) 00:09:15.048 20669.046 - 20769.871: 99.8772% ( 2) 00:09:15.048 20769.871 - 20870.695: 99.8996% ( 4) 00:09:15.048 20870.695 - 20971.520: 99.9163% ( 3) 00:09:15.048 20971.520 - 21072.345: 99.9330% ( 3) 00:09:15.048 21072.345 - 21173.169: 99.9498% ( 3) 00:09:15.048 21173.169 - 21273.994: 99.9665% ( 3) 00:09:15.048 21273.994 - 21374.818: 99.9833% ( 3) 00:09:15.048 21374.818 - 21475.643: 100.0000% ( 3) 00:09:15.048 00:09:15.049 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:15.049 ============================================================================== 00:09:15.049 Range in us Cumulative IO count 00:09:15.049 5368.911 - 5394.117: 0.0056% ( 1) 00:09:15.049 5570.560 - 5595.766: 0.0223% ( 3) 00:09:15.049 5595.766 - 5620.972: 0.0335% ( 2) 00:09:15.049 5620.972 - 5646.178: 0.0781% ( 8) 00:09:15.049 5646.178 - 5671.385: 0.1395% ( 11) 00:09:15.049 5671.385 - 
5696.591: 0.1953% ( 10) 00:09:15.049 5696.591 - 5721.797: 0.2567% ( 11) 00:09:15.049 5721.797 - 5747.003: 0.2958% ( 7) 00:09:15.049 5747.003 - 5772.209: 0.3516% ( 10) 00:09:15.049 5772.209 - 5797.415: 0.4353% ( 15) 00:09:15.049 5797.415 - 5822.622: 0.5301% ( 17) 00:09:15.049 5822.622 - 5847.828: 0.6752% ( 26) 00:09:15.049 5847.828 - 5873.034: 0.8315% ( 28) 00:09:15.049 5873.034 - 5898.240: 1.0212% ( 34) 00:09:15.049 5898.240 - 5923.446: 1.2333% ( 38) 00:09:15.049 5923.446 - 5948.652: 1.5346% ( 54) 00:09:15.049 5948.652 - 5973.858: 1.8694% ( 60) 00:09:15.049 5973.858 - 5999.065: 2.4944% ( 112) 00:09:15.049 5999.065 - 6024.271: 3.0636% ( 102) 00:09:15.049 6024.271 - 6049.477: 3.6607% ( 107) 00:09:15.049 6049.477 - 6074.683: 4.3080% ( 116) 00:09:15.049 6074.683 - 6099.889: 5.3069% ( 179) 00:09:15.049 6099.889 - 6125.095: 6.1440% ( 150) 00:09:15.049 6125.095 - 6150.302: 7.5781% ( 257) 00:09:15.049 6150.302 - 6175.508: 8.8225% ( 223) 00:09:15.049 6175.508 - 6200.714: 9.9888% ( 209) 00:09:15.049 6200.714 - 6225.920: 11.2500% ( 226) 00:09:15.049 6225.920 - 6251.126: 12.6953% ( 259) 00:09:15.049 6251.126 - 6276.332: 13.9342% ( 222) 00:09:15.049 6276.332 - 6301.538: 15.3404% ( 252) 00:09:15.049 6301.538 - 6326.745: 17.1763% ( 329) 00:09:15.049 6326.745 - 6351.951: 19.1071% ( 346) 00:09:15.049 6351.951 - 6377.157: 20.8538% ( 313) 00:09:15.049 6377.157 - 6402.363: 22.7567% ( 341) 00:09:15.049 6402.363 - 6427.569: 25.1283% ( 425) 00:09:15.049 6427.569 - 6452.775: 27.7902% ( 477) 00:09:15.049 6452.775 - 6503.188: 32.8683% ( 910) 00:09:15.049 6503.188 - 6553.600: 38.5993% ( 1027) 00:09:15.049 6553.600 - 6604.012: 42.9129% ( 773) 00:09:15.049 6604.012 - 6654.425: 46.9810% ( 729) 00:09:15.049 6654.425 - 6704.837: 50.5190% ( 634) 00:09:15.049 6704.837 - 6755.249: 54.2690% ( 672) 00:09:15.049 6755.249 - 6805.662: 58.2478% ( 713) 00:09:15.049 6805.662 - 6856.074: 62.2210% ( 712) 00:09:15.049 6856.074 - 6906.486: 66.0212% ( 681) 00:09:15.049 6906.486 - 6956.898: 68.9955% ( 533) 00:09:15.049 6956.898 - 7007.311: 71.9587% ( 531) 00:09:15.049 7007.311 - 7057.723: 75.1842% ( 578) 00:09:15.049 7057.723 - 7108.135: 77.8237% ( 473) 00:09:15.049 7108.135 - 7158.548: 80.0670% ( 402) 00:09:15.049 7158.548 - 7208.960: 82.0982% ( 364) 00:09:15.049 7208.960 - 7259.372: 83.6496% ( 278) 00:09:15.049 7259.372 - 7309.785: 84.9275% ( 229) 00:09:15.049 7309.785 - 7360.197: 86.0770% ( 206) 00:09:15.049 7360.197 - 7410.609: 86.9141% ( 150) 00:09:15.049 7410.609 - 7461.022: 87.6953% ( 140) 00:09:15.049 7461.022 - 7511.434: 88.4040% ( 127) 00:09:15.049 7511.434 - 7561.846: 89.0960% ( 124) 00:09:15.049 7561.846 - 7612.258: 89.7154% ( 111) 00:09:15.049 7612.258 - 7662.671: 90.2288% ( 92) 00:09:15.049 7662.671 - 7713.083: 90.6306% ( 72) 00:09:15.049 7713.083 - 7763.495: 91.1328% ( 90) 00:09:15.049 7763.495 - 7813.908: 91.5402% ( 73) 00:09:15.049 7813.908 - 7864.320: 91.7634% ( 40) 00:09:15.049 7864.320 - 7914.732: 91.8750% ( 20) 00:09:15.049 7914.732 - 7965.145: 91.9475% ( 13) 00:09:15.049 7965.145 - 8015.557: 92.0033% ( 10) 00:09:15.049 8015.557 - 8065.969: 92.0312% ( 5) 00:09:15.049 8065.969 - 8116.382: 92.0592% ( 5) 00:09:15.049 8116.382 - 8166.794: 92.0815% ( 4) 00:09:15.049 8166.794 - 8217.206: 92.0982% ( 3) 00:09:15.049 8217.206 - 8267.618: 92.1261% ( 5) 00:09:15.049 8267.618 - 8318.031: 92.1931% ( 12) 00:09:15.049 8318.031 - 8368.443: 92.2712% ( 14) 00:09:15.049 8368.443 - 8418.855: 92.3326% ( 11) 00:09:15.049 8418.855 - 8469.268: 92.3996% ( 12) 00:09:15.049 8469.268 - 8519.680: 92.4330% ( 6) 00:09:15.049 8519.680 - 8570.092: 
92.4609% ( 5) 00:09:15.049 8570.092 - 8620.505: 92.4833% ( 4) 00:09:15.049 8620.505 - 8670.917: 92.5112% ( 5) 00:09:15.049 8670.917 - 8721.329: 92.5335% ( 4) 00:09:15.049 8721.329 - 8771.742: 92.5614% ( 5) 00:09:15.049 8771.742 - 8822.154: 92.5893% ( 5) 00:09:15.049 8822.154 - 8872.566: 92.6116% ( 4) 00:09:15.049 8872.566 - 8922.978: 92.6395% ( 5) 00:09:15.049 8922.978 - 8973.391: 92.6730% ( 6) 00:09:15.049 8973.391 - 9023.803: 92.7009% ( 5) 00:09:15.049 9023.803 - 9074.215: 92.7400% ( 7) 00:09:15.049 9074.215 - 9124.628: 92.7623% ( 4) 00:09:15.049 9124.628 - 9175.040: 92.7958% ( 6) 00:09:15.049 9175.040 - 9225.452: 92.8237% ( 5) 00:09:15.049 9225.452 - 9275.865: 92.8571% ( 6) 00:09:15.049 9275.865 - 9326.277: 92.8906% ( 6) 00:09:15.049 9326.277 - 9376.689: 92.9129% ( 4) 00:09:15.049 9376.689 - 9427.102: 92.9464% ( 6) 00:09:15.049 9427.102 - 9477.514: 92.9743% ( 5) 00:09:15.049 9477.514 - 9527.926: 93.0134% ( 7) 00:09:15.049 9527.926 - 9578.338: 93.0357% ( 4) 00:09:15.049 9578.338 - 9628.751: 93.0692% ( 6) 00:09:15.049 9628.751 - 9679.163: 93.1027% ( 6) 00:09:15.049 9679.163 - 9729.575: 93.1306% ( 5) 00:09:15.049 9729.575 - 9779.988: 93.1641% ( 6) 00:09:15.049 9779.988 - 9830.400: 93.2031% ( 7) 00:09:15.049 9830.400 - 9880.812: 93.2701% ( 12) 00:09:15.049 9880.812 - 9931.225: 93.3315% ( 11) 00:09:15.049 9931.225 - 9981.637: 93.4040% ( 13) 00:09:15.049 9981.637 - 10032.049: 93.4598% ( 10) 00:09:15.049 10032.049 - 10082.462: 93.5100% ( 9) 00:09:15.049 10082.462 - 10132.874: 93.5714% ( 11) 00:09:15.049 10132.874 - 10183.286: 93.6161% ( 8) 00:09:15.049 10183.286 - 10233.698: 93.6496% ( 6) 00:09:15.049 10233.698 - 10284.111: 93.6775% ( 5) 00:09:15.049 10284.111 - 10334.523: 93.7221% ( 8) 00:09:15.049 10334.523 - 10384.935: 93.7556% ( 6) 00:09:15.049 10384.935 - 10435.348: 93.7779% ( 4) 00:09:15.049 10435.348 - 10485.760: 93.7891% ( 2) 00:09:15.049 10485.760 - 10536.172: 93.8058% ( 3) 00:09:15.049 10536.172 - 10586.585: 93.8170% ( 2) 00:09:15.049 10586.585 - 10636.997: 93.8281% ( 2) 00:09:15.049 10636.997 - 10687.409: 93.8449% ( 3) 00:09:15.049 10687.409 - 10737.822: 93.8560% ( 2) 00:09:15.049 10737.822 - 10788.234: 93.8728% ( 3) 00:09:15.049 10788.234 - 10838.646: 93.8839% ( 2) 00:09:15.049 10838.646 - 10889.058: 93.9007% ( 3) 00:09:15.049 10889.058 - 10939.471: 93.9118% ( 2) 00:09:15.049 10939.471 - 10989.883: 93.9286% ( 3) 00:09:15.049 10989.883 - 11040.295: 93.9397% ( 2) 00:09:15.049 11040.295 - 11090.708: 93.9565% ( 3) 00:09:15.049 11090.708 - 11141.120: 93.9676% ( 2) 00:09:15.049 11141.120 - 11191.532: 93.9844% ( 3) 00:09:15.049 11191.532 - 11241.945: 94.0179% ( 6) 00:09:15.049 11241.945 - 11292.357: 94.0513% ( 6) 00:09:15.049 11292.357 - 11342.769: 94.0848% ( 6) 00:09:15.049 11342.769 - 11393.182: 94.1239% ( 7) 00:09:15.049 11393.182 - 11443.594: 94.1574% ( 6) 00:09:15.049 11443.594 - 11494.006: 94.1908% ( 6) 00:09:15.049 11494.006 - 11544.418: 94.3917% ( 36) 00:09:15.049 11544.418 - 11594.831: 94.4364% ( 8) 00:09:15.049 11594.831 - 11645.243: 94.4810% ( 8) 00:09:15.049 11645.243 - 11695.655: 94.5201% ( 7) 00:09:15.049 11695.655 - 11746.068: 94.5871% ( 12) 00:09:15.049 11746.068 - 11796.480: 94.6540% ( 12) 00:09:15.050 11796.480 - 11846.892: 94.7266% ( 13) 00:09:15.050 11846.892 - 11897.305: 94.8103% ( 15) 00:09:15.050 11897.305 - 11947.717: 94.8940% ( 15) 00:09:15.050 11947.717 - 11998.129: 94.9665% ( 13) 00:09:15.050 11998.129 - 12048.542: 95.0502% ( 15) 00:09:15.050 12048.542 - 12098.954: 95.1562% ( 19) 00:09:15.050 12098.954 - 12149.366: 95.2846% ( 23) 00:09:15.050 12149.366 - 
12199.778: 95.5413% ( 46) 00:09:15.050 12199.778 - 12250.191: 95.6250% ( 15) 00:09:15.050 12250.191 - 12300.603: 95.8817% ( 46) 00:09:15.050 12300.603 - 12351.015: 95.9654% ( 15) 00:09:15.050 12351.015 - 12401.428: 96.0268% ( 11) 00:09:15.050 12401.428 - 12451.840: 96.0882% ( 11) 00:09:15.050 12451.840 - 12502.252: 96.1607% ( 13) 00:09:15.050 12502.252 - 12552.665: 96.2333% ( 13) 00:09:15.050 12552.665 - 12603.077: 96.3114% ( 14) 00:09:15.050 12603.077 - 12653.489: 96.3839% ( 13) 00:09:15.050 12653.489 - 12703.902: 96.4621% ( 14) 00:09:15.050 12703.902 - 12754.314: 96.5290% ( 12) 00:09:15.050 12754.314 - 12804.726: 96.6462% ( 21) 00:09:15.050 12804.726 - 12855.138: 96.7411% ( 17) 00:09:15.050 12855.138 - 12905.551: 96.8415% ( 18) 00:09:15.050 12905.551 - 13006.375: 97.0926% ( 45) 00:09:15.050 13006.375 - 13107.200: 97.2991% ( 37) 00:09:15.050 13107.200 - 13208.025: 97.6283% ( 59) 00:09:15.050 13208.025 - 13308.849: 98.0134% ( 69) 00:09:15.050 13308.849 - 13409.674: 98.1417% ( 23) 00:09:15.050 13409.674 - 13510.498: 98.2645% ( 22) 00:09:15.050 13510.498 - 13611.323: 98.3761% ( 20) 00:09:15.050 13611.323 - 13712.148: 98.4877% ( 20) 00:09:15.050 13712.148 - 13812.972: 98.6049% ( 21) 00:09:15.050 13812.972 - 13913.797: 98.7221% ( 21) 00:09:15.050 13913.797 - 14014.622: 98.8114% ( 16) 00:09:15.050 14014.622 - 14115.446: 98.9007% ( 16) 00:09:15.050 14115.446 - 14216.271: 98.9844% ( 15) 00:09:15.050 14216.271 - 14317.095: 99.0792% ( 17) 00:09:15.050 14317.095 - 14417.920: 99.1574% ( 14) 00:09:15.050 14417.920 - 14518.745: 99.2076% ( 9) 00:09:15.050 14518.745 - 14619.569: 99.2467% ( 7) 00:09:15.050 14619.569 - 14720.394: 99.2801% ( 6) 00:09:15.050 14720.394 - 14821.218: 99.2857% ( 1) 00:09:15.050 17644.308 - 17745.132: 99.2913% ( 1) 00:09:15.050 17745.132 - 17845.957: 99.3080% ( 3) 00:09:15.050 17845.957 - 17946.782: 99.3359% ( 5) 00:09:15.050 17946.782 - 18047.606: 99.3527% ( 3) 00:09:15.050 18047.606 - 18148.431: 99.3862% ( 6) 00:09:15.050 18148.431 - 18249.255: 99.4308% ( 8) 00:09:15.050 18249.255 - 18350.080: 99.4922% ( 11) 00:09:15.050 18350.080 - 18450.905: 99.6150% ( 22) 00:09:15.050 18450.905 - 18551.729: 99.7210% ( 19) 00:09:15.050 18551.729 - 18652.554: 99.7656% ( 8) 00:09:15.050 18652.554 - 18753.378: 99.7879% ( 4) 00:09:15.050 18753.378 - 18854.203: 99.8047% ( 3) 00:09:15.050 18854.203 - 18955.028: 99.8214% ( 3) 00:09:15.050 18955.028 - 19055.852: 99.8382% ( 3) 00:09:15.050 19055.852 - 19156.677: 99.8605% ( 4) 00:09:15.050 19156.677 - 19257.502: 99.8772% ( 3) 00:09:15.050 19257.502 - 19358.326: 99.8940% ( 3) 00:09:15.050 19358.326 - 19459.151: 99.9107% ( 3) 00:09:15.050 19459.151 - 19559.975: 99.9275% ( 3) 00:09:15.050 19559.975 - 19660.800: 99.9442% ( 3) 00:09:15.050 19660.800 - 19761.625: 99.9665% ( 4) 00:09:15.050 19761.625 - 19862.449: 99.9833% ( 3) 00:09:15.050 19862.449 - 19963.274: 100.0000% ( 3) 00:09:15.050 00:09:15.050 14:10:13 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:15.050 00:09:15.050 real 0m2.629s 00:09:15.050 user 0m2.314s 00:09:15.050 sys 0m0.206s 00:09:15.050 14:10:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:15.050 14:10:13 -- common/autotest_common.sh@10 -- # set +x 00:09:15.050 ************************************ 00:09:15.050 END TEST nvme_perf 00:09:15.050 ************************************ 00:09:15.050 14:10:13 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:15.050 14:10:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:15.050 14:10:13 -- 
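Each latency histogram above prints one bucket per line: the bucket's latency range in microseconds, the cumulative percentage of I/Os completed at or below that range, and the bucket's own I/O count in parentheses; zero-count buckets are skipped, which is why the printed ranges jump. A minimal, self-contained sketch of that presentation follows - the bucket edges and counts are sample values picked to echo the log, not SPDK's histogram internals:

    #include <stdio.h>

    /* Illustrative only: prints a cumulative latency table in the same
     * "range: cumulative% ( count )" shape as the histograms above.
     * Edges and counts are made-up sample data, not SPDK internals. */
    int main(void)
    {
        const double edge_us[] = { 6427.569, 6452.775, 6503.188, 6553.600 };
        const unsigned count[] = { 441, 423, 947 };   /* I/Os per bucket */
        const int nbuckets = 3;

        unsigned total = 0, cum = 0;
        for (int i = 0; i < nbuckets; i++)
            total += count[i];

        for (int i = 0; i < nbuckets; i++) {
            cum += count[i];
            if (count[i] == 0)
                continue;   /* empty buckets are not printed */
            printf("%10.3f - %10.3f: %8.4f%% (%6u)\n",
                   edge_us[i], edge_us[i + 1],
                   100.0 * cum / total, count[i]);
        }
        return 0;
    }

Read that way, the knee of each distribution above sits in the mid-6000s of microseconds, where the cumulative column climbs from roughly 25% to past 80% within a handful of buckets.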
00:09:15.050 14:10:13 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:15.050 14:10:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:09:15.050 14:10:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:15.050 14:10:13 -- common/autotest_common.sh@10 -- # set +x
00:09:15.050 ************************************
00:09:15.050 START TEST nvme_hello_world
00:09:15.050 ************************************
00:09:15.050 14:10:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:15.050 Initializing NVMe Controllers
00:09:15.050 Attached to 0000:00:06.0
00:09:15.050 Namespace ID: 1 size: 6GB
00:09:15.050 Attached to 0000:00:07.0
00:09:15.050 Namespace ID: 1 size: 5GB
00:09:15.050 Attached to 0000:00:09.0
00:09:15.050 Namespace ID: 1 size: 1GB
00:09:15.050 Attached to 0000:00:08.0
00:09:15.050 Namespace ID: 1 size: 4GB
00:09:15.050 Namespace ID: 2 size: 4GB
00:09:15.050 Namespace ID: 3 size: 4GB
00:09:15.050 Initialization complete.
00:09:15.050 INFO: using host memory buffer for IO
00:09:15.050 Hello world!
00:09:15.050 INFO: using host memory buffer for IO
00:09:15.050 Hello world!
00:09:15.050 INFO: using host memory buffer for IO
00:09:15.050 Hello world!
00:09:15.050 INFO: using host memory buffer for IO
00:09:15.050 Hello world!
00:09:15.050 INFO: using host memory buffer for IO
00:09:15.050 Hello world!
00:09:15.050 INFO: using host memory buffer for IO
00:09:15.050 Hello world!
00:09:15.050
00:09:15.050 real 0m0.264s
00:09:15.050 user 0m0.125s
00:09:15.050 sys 0m0.095s
00:09:15.050 14:10:13 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:15.050 14:10:13 -- common/autotest_common.sh@10 -- # set +x
00:09:15.050 ************************************
00:09:15.050 END TEST nvme_hello_world
00:09:15.050 ************************************
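The hello_world output above (attach notices plus per-namespace sizes) comes from SPDK's example application. A heavily abbreviated sketch of the probe/attach flow that produces those lines, using SPDK's public NVMe API; the calls named here are real, but treat the exact signatures as assumptions to be checked against the spdk/nvme.h in your tree:

    #include <stdbool.h>
    #include <stdio.h>
    #include <inttypes.h>
    #include "spdk/nvme.h"   /* requires an SPDK build tree */

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        return true;   /* attach to every controller the probe finds */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
        /* Walk the active namespaces, as in the log lines above. */
        for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
             nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
            printf("Namespace ID: %" PRIu32 " size: %" PRIu64 "GB\n",
                   nsid, spdk_nvme_ns_get_size(ns) / 1000000000ULL);
        }
    }

    int main(void)
    {
        struct spdk_env_opts opts;
        spdk_env_opts_init(&opts);
        if (spdk_env_init(&opts) < 0)
            return 1;
        /* A NULL transport id probes local PCIe controllers. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }

The example then allocates an I/O queue pair and performs a write/read round trip per namespace, which is presumably what emits each "using host memory buffer for IO" / "Hello world!" pair above.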
build_io_request_5 Invalid IO length parameter 00:09:15.567 0000:00:09.0: build_io_request_6 Invalid IO length parameter 00:09:15.567 0000:00:09.0: build_io_request_7 Invalid IO length parameter 00:09:15.567 0000:00:09.0: build_io_request_8 Invalid IO length parameter 00:09:15.567 0000:00:09.0: build_io_request_9 Invalid IO length parameter 00:09:15.567 0000:00:09.0: build_io_request_10 Invalid IO length parameter 00:09:15.567 0000:00:09.0: build_io_request_11 Invalid IO length parameter 00:09:15.567 0000:00:08.0: build_io_request_0 Invalid IO length parameter 00:09:15.567 0000:00:08.0: build_io_request_1 Invalid IO length parameter 00:09:15.567 0000:00:08.0: build_io_request_2 Invalid IO length parameter 00:09:15.567 0000:00:08.0: build_io_request_3 Invalid IO length parameter 00:09:15.567 0000:00:08.0: build_io_request_4 Invalid IO length parameter 00:09:15.567 0000:00:08.0: build_io_request_5 Invalid IO length parameter 00:09:15.567 0000:00:08.0: build_io_request_6 Invalid IO length parameter 00:09:15.567 0000:00:08.0: build_io_request_7 Invalid IO length parameter 00:09:15.567 0000:00:08.0: build_io_request_8 Invalid IO length parameter 00:09:15.567 0000:00:08.0: build_io_request_9 Invalid IO length parameter 00:09:15.567 0000:00:08.0: build_io_request_10 Invalid IO length parameter 00:09:15.567 0000:00:08.0: build_io_request_11 Invalid IO length parameter 00:09:15.567 NVMe Readv/Writev Request test 00:09:15.567 Attached to 0000:00:06.0 00:09:15.567 Attached to 0000:00:07.0 00:09:15.567 Attached to 0000:00:09.0 00:09:15.567 Attached to 0000:00:08.0 00:09:15.567 0000:00:06.0: build_io_request_2 test passed 00:09:15.567 0000:00:06.0: build_io_request_4 test passed 00:09:15.567 0000:00:06.0: build_io_request_5 test passed 00:09:15.567 0000:00:06.0: build_io_request_6 test passed 00:09:15.567 0000:00:06.0: build_io_request_7 test passed 00:09:15.568 0000:00:06.0: build_io_request_10 test passed 00:09:15.568 0000:00:07.0: build_io_request_2 test passed 00:09:15.568 0000:00:07.0: build_io_request_4 test passed 00:09:15.568 0000:00:07.0: build_io_request_5 test passed 00:09:15.568 0000:00:07.0: build_io_request_6 test passed 00:09:15.568 0000:00:07.0: build_io_request_7 test passed 00:09:15.568 0000:00:07.0: build_io_request_10 test passed 00:09:15.568 Cleaning up... 00:09:15.568 ************************************ 00:09:15.568 END TEST nvme_sgl 00:09:15.568 ************************************ 00:09:15.568 00:09:15.568 real 0m0.380s 00:09:15.568 user 0m0.234s 00:09:15.568 sys 0m0.096s 00:09:15.568 14:10:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:15.568 14:10:13 -- common/autotest_common.sh@10 -- # set +x 00:09:15.568 14:10:14 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:15.568 14:10:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:15.568 14:10:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:15.568 14:10:14 -- common/autotest_common.sh@10 -- # set +x 00:09:15.568 ************************************ 00:09:15.568 START TEST nvme_e2edp 00:09:15.568 ************************************ 00:09:15.568 14:10:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:15.826 NVMe Write/Read with End-to-End data protection test 00:09:15.826 Attached to 0000:00:06.0 00:09:15.826 Attached to 0000:00:07.0 00:09:15.826 Attached to 0000:00:09.0 00:09:15.826 Attached to 0000:00:08.0 00:09:15.826 Cleaning up... 
00:09:15.568 14:10:14 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:15.568 14:10:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:15.568 14:10:14 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:15.568 14:10:14 -- common/autotest_common.sh@10 -- # set +x
00:09:15.568 ************************************
00:09:15.568 START TEST nvme_e2edp
00:09:15.568 ************************************
00:09:15.568 14:10:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:15.826 NVMe Write/Read with End-to-End data protection test
00:09:15.826 Attached to 0000:00:06.0
00:09:15.826 Attached to 0000:00:07.0
00:09:15.826 Attached to 0000:00:09.0
00:09:15.826 Attached to 0000:00:08.0
00:09:15.826 Cleaning up...
00:09:15.826
00:09:15.826 real 0m0.209s
00:09:15.826 user 0m0.052s
00:09:15.826 sys 0m0.109s
00:09:15.826 ************************************
00:09:15.826 END TEST nvme_e2edp
00:09:15.826 ************************************
00:09:15.826 14:10:14 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:15.826 14:10:14 -- common/autotest_common.sh@10 -- # set +x
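nvme_dp exercises end-to-end data protection (T10 DIF), where each sector carries protection information that both host and controller can verify, including a 16-bit guard CRC. For reference, a sketch of the standard T10-DIF guard computation - CRC-16 with polynomial 0x8BB7, zero initial value, no reflection - in plain bitwise form; real drivers typically use table-driven or instruction-accelerated variants:

    #include <stddef.h>
    #include <stdint.h>

    /* CRC-16/T10-DIF: poly 0x8BB7, init 0x0000, no reflection, no xorout. */
    static uint16_t
    crc16_t10dif(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0;

        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)((uint16_t)buf[i] << 8);
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

The guard computed over a sector's data payload is stored in that sector's protection-information field, so a mismatch on read flags corruption anywhere along the path.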
00:09:15.826 14:10:14 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:15.826 14:10:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:15.826 14:10:14 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:15.826 14:10:14 -- common/autotest_common.sh@10 -- # set +x
00:09:15.826 ************************************
00:09:15.826 START TEST nvme_reserve
00:09:15.826 ************************************
00:09:15.826 14:10:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:16.084 =====================================================
00:09:16.084 NVMe Controller at PCI bus 0, device 6, function 0
00:09:16.084 =====================================================
00:09:16.084 Reservations: Not Supported
00:09:16.084 =====================================================
00:09:16.084 NVMe Controller at PCI bus 0, device 7, function 0
00:09:16.084 =====================================================
00:09:16.084 Reservations: Not Supported
00:09:16.084 =====================================================
00:09:16.084 NVMe Controller at PCI bus 0, device 9, function 0
00:09:16.084 =====================================================
00:09:16.084 Reservations: Not Supported
00:09:16.084 =====================================================
00:09:16.084 NVMe Controller at PCI bus 0, device 8, function 0
00:09:16.084 =====================================================
00:09:16.084 Reservations: Not Supported
00:09:16.084 Reservation test passed
00:09:16.084
00:09:16.084 real 0m0.195s
00:09:16.084 user 0m0.059s
00:09:16.084 sys 0m0.093s
00:09:16.084 ************************************
00:09:16.084 END TEST nvme_reserve
00:09:16.084 ************************************
00:09:16.084 14:10:14 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:16.084 14:10:14 -- common/autotest_common.sh@10 -- # set +x
00:09:16.084 14:10:14 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:16.084 14:10:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:16.084 14:10:14 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:16.084 14:10:14 -- common/autotest_common.sh@10 -- # set +x
00:09:16.084 ************************************
00:09:16.084 START TEST nvme_err_injection
00:09:16.084 ************************************
00:09:16.084 14:10:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:16.343 NVMe Error Injection test
00:09:16.343 Attached to 0000:00:06.0
00:09:16.343 Attached to 0000:00:07.0
00:09:16.343 Attached to 0000:00:09.0
00:09:16.343 Attached to 0000:00:08.0
00:09:16.343 0000:00:06.0: get features failed as expected
00:09:16.343 0000:00:07.0: get features failed as expected
00:09:16.343 0000:00:09.0: get features failed as expected
00:09:16.343 0000:00:08.0: get features failed as expected
00:09:16.343 0000:00:06.0: get features successfully as expected
00:09:16.343 0000:00:07.0: get features successfully as expected
00:09:16.343 0000:00:09.0: get features successfully as expected
00:09:16.343 0000:00:08.0: get features successfully as expected
00:09:16.343 0000:00:06.0: read failed as expected
00:09:16.343 0000:00:07.0: read failed as expected
00:09:16.343 0000:00:09.0: read failed as expected
00:09:16.343 0000:00:08.0: read failed as expected
00:09:16.343 0000:00:06.0: read successfully as expected
00:09:16.343 0000:00:07.0: read successfully as expected
00:09:16.343 0000:00:09.0: read successfully as expected
00:09:16.343 0000:00:08.0: read successfully as expected
00:09:16.343 Cleaning up...
00:09:16.343
00:09:16.343 real 0m0.266s
00:09:16.343 user 0m0.111s
00:09:16.343 sys 0m0.104s
00:09:16.343 ************************************
00:09:16.343 END TEST nvme_err_injection
00:09:16.343 ************************************
00:09:16.343 14:10:14 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:16.343 14:10:14 -- common/autotest_common.sh@10 -- # set +x
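The err_injection pairs above follow the usual negative-test pattern: arm an injected error for a command opcode, issue the command and require that it fails ("failed as expected"), then disarm the injection and require that the same command succeeds ("successfully as expected"). Sketched below against a hypothetical inject/clear API - the dev_* names are placeholders for whatever hooks the driver under test exposes, not real SPDK functions:

    #include <assert.h>

    /* Hypothetical device handle and hooks; a return of 0 means success. */
    struct dev;
    void dev_inject_error(struct dev *d, int opcode);   /* hypothetical */
    void dev_clear_error(struct dev *d, int opcode);    /* hypothetical */
    int  dev_get_features(struct dev *d);               /* hypothetical */

    static void
    test_get_features_error_injection(struct dev *d, int opc_get_features)
    {
        /* Armed: the command must fail ("get features failed as expected"). */
        dev_inject_error(d, opc_get_features);
        assert(dev_get_features(d) != 0);

        /* Disarmed: the same command must now succeed
         * ("get features successfully as expected"). */
        dev_clear_error(d, opc_get_features);
        assert(dev_get_features(d) == 0);
    }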
00:09:17.719 submit (in ns) avg, min, max = 12221.1, 10963.1, 62331.5 00:09:17.719 complete (in ns) avg, min, max = 8043.4, 7617.7, 101082.3 00:09:17.719 00:09:17.719 Submit histogram 00:09:17.719 ================ 00:09:17.719 Range in us Cumulative Count 00:09:17.719 10.929 - 10.978: 0.0062% ( 1) 00:09:17.719 11.471 - 11.520: 0.0375% ( 5) 00:09:17.719 11.520 - 11.569: 0.1250% ( 14) 00:09:17.719 11.569 - 11.618: 0.3375% ( 34) 00:09:17.719 11.618 - 11.668: 1.0061% ( 107) 00:09:17.719 11.668 - 11.717: 2.6247% ( 259) 00:09:17.719 11.717 - 11.766: 5.8805% ( 521) 00:09:17.719 11.766 - 11.815: 11.0486% ( 827) 00:09:17.719 11.815 - 11.865: 17.8353% ( 1086) 00:09:17.719 11.865 - 11.914: 26.1967% ( 1338) 00:09:17.719 11.914 - 11.963: 36.0392% ( 1575) 00:09:17.719 11.963 - 12.012: 45.9943% ( 1593) 00:09:17.719 12.012 - 12.062: 54.7994% ( 1409) 00:09:17.719 12.062 - 12.111: 62.8359% ( 1286) 00:09:17.719 12.111 - 12.160: 69.5538% ( 1075) 00:09:17.719 12.160 - 12.209: 75.3031% ( 920) 00:09:17.719 12.209 - 12.258: 79.3963% ( 655) 00:09:17.719 12.258 - 12.308: 82.7834% ( 542) 00:09:17.719 12.308 - 12.357: 85.3518% ( 411) 00:09:17.719 12.357 - 12.406: 87.7078% ( 377) 00:09:17.719 12.406 - 12.455: 89.5576% ( 296) 00:09:17.719 12.455 - 12.505: 91.1511% ( 255) 00:09:17.719 12.505 - 12.554: 92.5384% ( 222) 00:09:17.719 12.554 - 12.603: 93.5321% ( 159) 00:09:17.719 12.603 - 12.702: 95.1319% ( 256) 00:09:17.719 12.702 - 12.800: 96.0630% ( 149) 00:09:17.719 12.800 - 12.898: 96.5817% ( 83) 00:09:17.719 12.898 - 12.997: 96.8754% ( 47) 00:09:17.719 12.997 - 13.095: 97.0379% ( 26) 00:09:17.719 13.095 - 13.194: 97.0879% ( 8) 00:09:17.719 13.194 - 13.292: 97.1254% ( 6) 00:09:17.719 13.292 - 13.391: 97.1566% ( 5) 00:09:17.719 13.391 - 13.489: 97.1691% ( 2) 00:09:17.719 13.489 - 13.588: 97.1754% ( 1) 00:09:17.719 13.588 - 13.686: 97.1879% ( 2) 00:09:17.719 13.686 - 13.785: 97.2253% ( 6) 00:09:17.719 13.785 - 13.883: 97.3066% ( 13) 00:09:17.719 13.883 - 13.982: 97.3753% ( 11) 00:09:17.719 13.982 - 14.080: 97.5503% ( 28) 00:09:17.719 14.080 - 14.178: 97.7065% ( 25) 00:09:17.719 14.178 - 14.277: 97.8128% ( 17) 00:09:17.719 14.277 - 14.375: 97.8878% ( 12) 00:09:17.719 14.375 - 14.474: 97.9503% ( 10) 00:09:17.719 14.474 - 14.572: 98.0190% ( 11) 00:09:17.719 14.572 - 14.671: 98.0752% ( 9) 00:09:17.719 14.671 - 14.769: 98.1190% ( 7) 00:09:17.719 14.769 - 14.868: 98.1502% ( 5) 00:09:17.719 14.868 - 14.966: 98.1815% ( 5) 00:09:17.719 14.966 - 15.065: 98.2002% ( 3) 00:09:17.719 15.065 - 15.163: 98.2065% ( 1) 00:09:17.719 15.163 - 15.262: 98.2127% ( 1) 00:09:17.719 15.262 - 15.360: 98.2315% ( 3) 00:09:17.719 15.360 - 15.458: 98.2377% ( 1) 00:09:17.719 15.557 - 15.655: 98.2690% ( 5) 00:09:17.719 15.655 - 15.754: 98.2815% ( 2) 00:09:17.720 15.754 - 15.852: 98.2940% ( 2) 00:09:17.720 15.852 - 15.951: 98.3065% ( 2) 00:09:17.720 15.951 - 16.049: 98.3127% ( 1) 00:09:17.720 16.049 - 16.148: 98.3502% ( 6) 00:09:17.720 16.148 - 16.246: 98.3940% ( 7) 00:09:17.720 16.246 - 16.345: 98.4127% ( 3) 00:09:17.720 16.345 - 16.443: 98.4252% ( 2) 00:09:17.720 16.443 - 16.542: 98.4439% ( 3) 00:09:17.720 16.542 - 16.640: 98.4502% ( 1) 00:09:17.720 16.640 - 16.738: 98.4689% ( 3) 00:09:17.720 16.837 - 16.935: 98.4877% ( 3) 00:09:17.720 16.935 - 17.034: 98.4939% ( 1) 00:09:17.720 17.034 - 17.132: 98.5002% ( 1) 00:09:17.720 17.231 - 17.329: 98.5127% ( 2) 00:09:17.720 17.329 - 17.428: 98.5189% ( 1) 00:09:17.720 17.428 - 17.526: 98.5627% ( 7) 00:09:17.720 17.526 - 17.625: 98.5939% ( 5) 00:09:17.720 17.625 - 17.723: 98.6814% ( 14) 00:09:17.720 17.723 - 
17.822: 98.7689% ( 14) 00:09:17.720 17.822 - 17.920: 98.8751% ( 17) 00:09:17.720 17.920 - 18.018: 98.9876% ( 18) 00:09:17.720 18.018 - 18.117: 99.0626% ( 12) 00:09:17.720 18.117 - 18.215: 99.1126% ( 8) 00:09:17.720 18.215 - 18.314: 99.1751% ( 10) 00:09:17.720 18.314 - 18.412: 99.2501% ( 12) 00:09:17.720 18.412 - 18.511: 99.3063% ( 9) 00:09:17.720 18.511 - 18.609: 99.3876% ( 13) 00:09:17.720 18.609 - 18.708: 99.4688% ( 13) 00:09:17.720 18.708 - 18.806: 99.5188% ( 8) 00:09:17.720 18.806 - 18.905: 99.5813% ( 10) 00:09:17.720 18.905 - 19.003: 99.6125% ( 5) 00:09:17.720 19.003 - 19.102: 99.6813% ( 11) 00:09:17.720 19.102 - 19.200: 99.7063% ( 4) 00:09:17.720 19.200 - 19.298: 99.7500% ( 7) 00:09:17.720 19.298 - 19.397: 99.7625% ( 2) 00:09:17.720 19.397 - 19.495: 99.7688% ( 1) 00:09:17.720 19.495 - 19.594: 99.7875% ( 3) 00:09:17.720 19.594 - 19.692: 99.7938% ( 1) 00:09:17.720 19.692 - 19.791: 99.8000% ( 1) 00:09:17.720 19.791 - 19.889: 99.8125% ( 2) 00:09:17.720 19.889 - 19.988: 99.8188% ( 1) 00:09:17.720 20.185 - 20.283: 99.8250% ( 1) 00:09:17.720 20.382 - 20.480: 99.8313% ( 1) 00:09:17.720 20.677 - 20.775: 99.8375% ( 1) 00:09:17.720 20.775 - 20.874: 99.8438% ( 1) 00:09:17.720 20.874 - 20.972: 99.8500% ( 1) 00:09:17.720 20.972 - 21.071: 99.8563% ( 1) 00:09:17.720 21.169 - 21.268: 99.8688% ( 2) 00:09:17.720 21.563 - 21.662: 99.8750% ( 1) 00:09:17.720 21.760 - 21.858: 99.8875% ( 2) 00:09:17.720 21.858 - 21.957: 99.8938% ( 1) 00:09:17.720 22.646 - 22.745: 99.9000% ( 1) 00:09:17.720 22.745 - 22.843: 99.9063% ( 1) 00:09:17.720 22.942 - 23.040: 99.9125% ( 1) 00:09:17.720 23.138 - 23.237: 99.9188% ( 1) 00:09:17.720 23.335 - 23.434: 99.9250% ( 1) 00:09:17.720 28.554 - 28.751: 99.9313% ( 1) 00:09:17.720 28.751 - 28.948: 99.9375% ( 1) 00:09:17.720 29.145 - 29.342: 99.9438% ( 1) 00:09:17.720 34.855 - 35.052: 99.9500% ( 1) 00:09:17.720 40.763 - 40.960: 99.9563% ( 1) 00:09:17.720 43.717 - 43.914: 99.9625% ( 1) 00:09:17.720 47.065 - 47.262: 99.9688% ( 1) 00:09:17.720 49.428 - 49.625: 99.9750% ( 1) 00:09:17.720 53.563 - 53.957: 99.9813% ( 1) 00:09:17.720 54.351 - 54.745: 99.9875% ( 1) 00:09:17.720 61.834 - 62.228: 99.9938% ( 1) 00:09:17.720 62.228 - 62.622: 100.0000% ( 1) 00:09:17.720 00:09:17.720 Complete histogram 00:09:17.720 ================== 00:09:17.720 Range in us Cumulative Count 00:09:17.720 7.582 - 7.631: 0.0187% ( 3) 00:09:17.720 7.631 - 7.680: 0.3562% ( 54) 00:09:17.720 7.680 - 7.729: 4.0932% ( 598) 00:09:17.720 7.729 - 7.778: 13.6545% ( 1530) 00:09:17.720 7.778 - 7.828: 27.2341% ( 2173) 00:09:17.720 7.828 - 7.877: 42.4634% ( 2437) 00:09:17.720 7.877 - 7.926: 57.9115% ( 2472) 00:09:17.720 7.926 - 7.975: 70.4849% ( 2012) 00:09:17.720 7.975 - 8.025: 79.8338% ( 1496) 00:09:17.720 8.025 - 8.074: 86.1830% ( 1016) 00:09:17.720 8.074 - 8.123: 90.5199% ( 694) 00:09:17.720 8.123 - 8.172: 93.3946% ( 460) 00:09:17.720 8.172 - 8.222: 95.2881% ( 303) 00:09:17.720 8.222 - 8.271: 96.4817% ( 191) 00:09:17.720 8.271 - 8.320: 97.2191% ( 118) 00:09:17.720 8.320 - 8.369: 97.6190% ( 64) 00:09:17.720 8.369 - 8.418: 97.8878% ( 43) 00:09:17.720 8.418 - 8.468: 98.0565% ( 27) 00:09:17.720 8.468 - 8.517: 98.1190% ( 10) 00:09:17.720 8.517 - 8.566: 98.1627% ( 7) 00:09:17.720 8.566 - 8.615: 98.1815% ( 3) 00:09:17.720 8.615 - 8.665: 98.2002% ( 3) 00:09:17.720 8.665 - 8.714: 98.2127% ( 2) 00:09:17.720 8.714 - 8.763: 98.2190% ( 1) 00:09:17.720 8.763 - 8.812: 98.2252% ( 1) 00:09:17.720 8.812 - 8.862: 98.2315% ( 1) 00:09:17.720 8.862 - 8.911: 98.2377% ( 1) 00:09:17.720 9.058 - 9.108: 98.2502% ( 2) 00:09:17.720 9.157 - 9.206: 
98.2565% ( 1) 00:09:17.720 9.206 - 9.255: 98.2627% ( 1) 00:09:17.720 9.354 - 9.403: 98.2690% ( 1) 00:09:17.720 9.403 - 9.452: 98.2752% ( 1) 00:09:17.720 9.452 - 9.502: 98.2815% ( 1) 00:09:17.720 9.748 - 9.797: 98.2877% ( 1) 00:09:17.720 9.846 - 9.895: 98.2940% ( 1) 00:09:17.720 9.994 - 10.043: 98.3002% ( 1) 00:09:17.720 10.191 - 10.240: 98.3065% ( 1) 00:09:17.720 10.289 - 10.338: 98.3127% ( 1) 00:09:17.720 10.683 - 10.732: 98.3252% ( 2) 00:09:17.720 10.732 - 10.782: 98.3315% ( 1) 00:09:17.720 10.880 - 10.929: 98.3440% ( 2) 00:09:17.720 11.126 - 11.175: 98.3502% ( 1) 00:09:17.720 11.372 - 11.422: 98.3627% ( 2) 00:09:17.720 11.618 - 11.668: 98.3690% ( 1) 00:09:17.720 11.717 - 11.766: 98.3752% ( 1) 00:09:17.720 12.357 - 12.406: 98.3815% ( 1) 00:09:17.720 12.455 - 12.505: 98.3877% ( 1) 00:09:17.720 12.554 - 12.603: 98.3940% ( 1) 00:09:17.720 12.800 - 12.898: 98.4002% ( 1) 00:09:17.720 13.095 - 13.194: 98.4189% ( 3) 00:09:17.720 13.194 - 13.292: 98.4252% ( 1) 00:09:17.720 13.292 - 13.391: 98.4564% ( 5) 00:09:17.720 13.391 - 13.489: 98.5189% ( 10) 00:09:17.720 13.489 - 13.588: 98.5564% ( 6) 00:09:17.720 13.588 - 13.686: 98.6314% ( 12) 00:09:17.720 13.686 - 13.785: 98.7189% ( 14) 00:09:17.720 13.785 - 13.883: 98.7627% ( 7) 00:09:17.720 13.883 - 13.982: 98.8314% ( 11) 00:09:17.720 13.982 - 14.080: 98.8939% ( 10) 00:09:17.720 14.080 - 14.178: 98.9564% ( 10) 00:09:17.720 14.178 - 14.277: 99.0376% ( 13) 00:09:17.720 14.277 - 14.375: 99.1439% ( 17) 00:09:17.720 14.375 - 14.474: 99.2251% ( 13) 00:09:17.720 14.474 - 14.572: 99.3126% ( 14) 00:09:17.720 14.572 - 14.671: 99.3876% ( 12) 00:09:17.720 14.671 - 14.769: 99.4376% ( 8) 00:09:17.720 14.769 - 14.868: 99.5438% ( 17) 00:09:17.720 14.868 - 14.966: 99.5876% ( 7) 00:09:17.720 14.966 - 15.065: 99.6438% ( 9) 00:09:17.720 15.065 - 15.163: 99.6813% ( 6) 00:09:17.720 15.163 - 15.262: 99.7063% ( 4) 00:09:17.720 15.262 - 15.360: 99.7500% ( 7) 00:09:17.720 15.360 - 15.458: 99.7563% ( 1) 00:09:17.720 15.458 - 15.557: 99.7625% ( 1) 00:09:17.720 15.557 - 15.655: 99.7813% ( 3) 00:09:17.720 15.754 - 15.852: 99.8000% ( 3) 00:09:17.720 15.852 - 15.951: 99.8063% ( 1) 00:09:17.720 15.951 - 16.049: 99.8125% ( 1) 00:09:17.720 16.345 - 16.443: 99.8250% ( 2) 00:09:17.720 16.542 - 16.640: 99.8375% ( 2) 00:09:17.720 16.935 - 17.034: 99.8438% ( 1) 00:09:17.720 17.034 - 17.132: 99.8563% ( 2) 00:09:17.720 17.329 - 17.428: 99.8625% ( 1) 00:09:17.720 17.822 - 17.920: 99.8688% ( 1) 00:09:17.720 18.117 - 18.215: 99.8813% ( 2) 00:09:17.720 18.314 - 18.412: 99.8938% ( 2) 00:09:17.720 18.609 - 18.708: 99.9063% ( 2) 00:09:17.720 18.806 - 18.905: 99.9250% ( 3) 00:09:17.720 19.003 - 19.102: 99.9313% ( 1) 00:09:17.720 19.397 - 19.495: 99.9375% ( 1) 00:09:17.720 20.775 - 20.874: 99.9500% ( 2) 00:09:17.720 20.874 - 20.972: 99.9563% ( 1) 00:09:17.720 23.138 - 23.237: 99.9625% ( 1) 00:09:17.720 27.569 - 27.766: 99.9688% ( 1) 00:09:17.720 29.538 - 29.735: 99.9750% ( 1) 00:09:17.720 38.006 - 38.203: 99.9813% ( 1) 00:09:17.720 51.594 - 51.988: 99.9875% ( 1) 00:09:17.720 51.988 - 52.382: 99.9938% ( 1) 00:09:17.720 100.825 - 101.612: 100.0000% ( 1) 00:09:17.720 00:09:17.720 ************************************ 00:09:17.720 END TEST nvme_overhead 00:09:17.720 ************************************ 00:09:17.720 00:09:17.720 real 0m1.220s 00:09:17.720 user 0m1.070s 00:09:17.720 sys 0m0.101s 00:09:17.720 14:10:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:17.720 14:10:16 -- common/autotest_common.sh@10 -- # set +x 00:09:17.720 14:10:16 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration 
/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:17.720 14:10:16 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:17.720 14:10:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:17.720 14:10:16 -- common/autotest_common.sh@10 -- # set +x 00:09:17.720 ************************************ 00:09:17.720 START TEST nvme_arbitration 00:09:17.720 ************************************ 00:09:17.720 14:10:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:21.017 Initializing NVMe Controllers 00:09:21.017 Attached to 0000:00:06.0 00:09:21.017 Attached to 0000:00:07.0 00:09:21.017 Attached to 0000:00:09.0 00:09:21.017 Attached to 0000:00:08.0 00:09:21.017 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:21.017 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:21.017 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:21.017 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:21.017 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:21.017 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:21.017 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:21.017 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:21.017 Initialization complete. Launching workers. 00:09:21.017 Starting thread on core 1 with urgent priority queue 00:09:21.017 Starting thread on core 2 with urgent priority queue 00:09:21.017 Starting thread on core 3 with urgent priority queue 00:09:21.017 Starting thread on core 0 with urgent priority queue 00:09:21.017 QEMU NVMe Ctrl (12340 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:09:21.017 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:09:21.017 QEMU NVMe Ctrl (12341 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:09:21.017 QEMU NVMe Ctrl (12342 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:09:21.017 QEMU NVMe Ctrl (12343 ) core 2: 917.33 IO/s 109.01 secs/100000 ios 00:09:21.017 QEMU NVMe Ctrl (12342 ) core 3: 789.33 IO/s 126.69 secs/100000 ios 00:09:21.017 ======================================================== 00:09:21.017 00:09:21.017 ************************************ 00:09:21.017 END TEST nvme_arbitration 00:09:21.017 ************************************ 00:09:21.017 00:09:21.017 real 0m3.414s 00:09:21.017 user 0m9.531s 00:09:21.017 sys 0m0.118s 00:09:21.017 14:10:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:21.017 14:10:19 -- common/autotest_common.sh@10 -- # set +x 00:09:21.017 14:10:19 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:21.017 14:10:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:21.017 14:10:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.017 14:10:19 -- common/autotest_common.sh@10 -- # set +x 00:09:21.017 ************************************ 00:09:21.017 START TEST nvme_single_aen 00:09:21.017 ************************************ 00:09:21.017 14:10:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:21.017 [2024-11-19 14:10:19.552483] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
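The arbitration summary above comes straight from the expanded command line the harness printed at nvme.sh@93. For reference, a minimal way to reproduce that run outside run_test, assuming the same checkout and built examples at the paths shown in this log, plus root privileges for PCIe access (an assumption, not shown in the trace):

    # Flags copied verbatim from the expanded arbitration command above;
    # -c 0xf matches the four "Starting thread on core N" lines in the log.
    cd /home/vagrant/spdk_repo/spdk
    sudo ./build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0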
00:09:21.017 [2024-11-19 14:10:19.552541] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.277 [2024-11-19 14:10:19.679598] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:21.277 [2024-11-19 14:10:19.681691] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:21.277 [2024-11-19 14:10:19.683452] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:21.277 [2024-11-19 14:10:19.685176] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:21.277 Asynchronous Event Request test 00:09:21.277 Attached to 0000:00:06.0 00:09:21.277 Attached to 0000:00:07.0 00:09:21.277 Attached to 0000:00:09.0 00:09:21.277 Attached to 0000:00:08.0 00:09:21.277 Reset controller to setup AER completions for this process 00:09:21.277 Registering asynchronous event callbacks... 00:09:21.277 Getting orig temperature thresholds of all controllers 00:09:21.277 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:21.277 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:21.277 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:21.277 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:21.277 Setting all controllers temperature threshold low to trigger AER 00:09:21.277 Waiting for all controllers temperature threshold to be set lower 00:09:21.277 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:21.277 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:09:21.277 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:21.277 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:09:21.277 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:21.277 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:09:21.277 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:21.277 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:09:21.277 Waiting for all controllers to trigger AER and reset threshold 00:09:21.277 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.277 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.277 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.277 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.277 Cleaning up... 
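The exchange just above is the whole single-AER script: record each controller's original temperature threshold, set the threshold low so the current reading trips an Asynchronous Event, reset the threshold from aer_cb, then clean up. A minimal sketch of repeating it standalone, with the flags exactly as nvme.sh@94 passed them (-i 0 corresponds to the spdk0 file prefix visible in the EAL parameters above):

    # -T drives the temperature-threshold AER path, -L log enables the
    # 'log' trace flag; invocation mirrors the run_test line in this log.
    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/nvme/aer/aer -T -i 0 -L log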
00:09:21.277 00:09:21.277 real 0m0.201s 00:09:21.277 user 0m0.066s 00:09:21.277 sys 0m0.098s 00:09:21.277 14:10:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:21.277 14:10:19 -- common/autotest_common.sh@10 -- # set +x 00:09:21.277 ************************************ 00:09:21.277 END TEST nvme_single_aen 00:09:21.277 ************************************ 00:09:21.277 14:10:19 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:21.277 14:10:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:21.277 14:10:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.277 14:10:19 -- common/autotest_common.sh@10 -- # set +x 00:09:21.277 ************************************ 00:09:21.277 START TEST nvme_doorbell_aers 00:09:21.277 ************************************ 00:09:21.277 14:10:19 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:09:21.277 14:10:19 -- nvme/nvme.sh@70 -- # bdfs=() 00:09:21.277 14:10:19 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:21.277 14:10:19 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:21.277 14:10:19 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:21.277 14:10:19 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:21.277 14:10:19 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:21.277 14:10:19 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:21.277 14:10:19 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:21.277 14:10:19 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:21.277 14:10:19 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:21.277 14:10:19 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:21.277 14:10:19 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:21.277 14:10:19 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:09:21.535 [2024-11-19 14:10:20.004689] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63908) is not found. Dropping the request. 00:09:31.546 Executing: test_write_invalid_db 00:09:31.546 Waiting for AER completion... 00:09:31.546 Failure: test_write_invalid_db 00:09:31.546 00:09:31.546 Executing: test_invalid_db_write_overflow_sq 00:09:31.546 Waiting for AER completion... 00:09:31.546 Failure: test_invalid_db_write_overflow_sq 00:09:31.546 00:09:31.546 Executing: test_invalid_db_write_overflow_cq 00:09:31.546 Waiting for AER completion... 00:09:31.546 Failure: test_invalid_db_write_overflow_cq 00:09:31.546 00:09:31.546 14:10:29 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:31.546 14:10:29 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:09:31.546 [2024-11-19 14:10:30.056058] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63908) is not found. Dropping the request. 00:09:41.513 Executing: test_write_invalid_db 00:09:41.513 Waiting for AER completion... 00:09:41.513 Failure: test_write_invalid_db 00:09:41.513 00:09:41.513 Executing: test_invalid_db_write_overflow_sq 00:09:41.513 Waiting for AER completion... 
00:09:41.513 Failure: test_invalid_db_write_overflow_sq 00:09:41.513 00:09:41.513 Executing: test_invalid_db_write_overflow_cq 00:09:41.513 Waiting for AER completion... 00:09:41.513 Failure: test_invalid_db_write_overflow_cq 00:09:41.513 00:09:41.513 14:10:39 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:41.513 14:10:39 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:09:41.771 [2024-11-19 14:10:40.077179] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63908) is not found. Dropping the request. 00:09:51.736 Executing: test_write_invalid_db 00:09:51.736 Waiting for AER completion... 00:09:51.736 Failure: test_write_invalid_db 00:09:51.736 00:09:51.736 Executing: test_invalid_db_write_overflow_sq 00:09:51.736 Waiting for AER completion... 00:09:51.736 Failure: test_invalid_db_write_overflow_sq 00:09:51.736 00:09:51.736 Executing: test_invalid_db_write_overflow_cq 00:09:51.736 Waiting for AER completion... 00:09:51.736 Failure: test_invalid_db_write_overflow_cq 00:09:51.736 00:09:51.736 14:10:49 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:51.736 14:10:49 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:09:51.736 [2024-11-19 14:10:50.142554] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63908) is not found. Dropping the request. 00:10:01.703 Executing: test_write_invalid_db 00:10:01.703 Waiting for AER completion... 00:10:01.703 Failure: test_write_invalid_db 00:10:01.703 00:10:01.703 Executing: test_invalid_db_write_overflow_sq 00:10:01.703 Waiting for AER completion... 00:10:01.703 Failure: test_invalid_db_write_overflow_sq 00:10:01.703 00:10:01.703 Executing: test_invalid_db_write_overflow_cq 00:10:01.703 Waiting for AER completion... 00:10:01.703 Failure: test_invalid_db_write_overflow_cq 00:10:01.703 00:10:01.703 00:10:01.703 real 0m40.199s 00:10:01.703 user 0m34.133s 00:10:01.703 sys 0m5.669s 00:10:01.703 14:10:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:01.703 14:10:59 -- common/autotest_common.sh@10 -- # set +x 00:10:01.703 ************************************ 00:10:01.703 END TEST nvme_doorbell_aers 00:10:01.703 ************************************ 00:10:01.703 14:10:59 -- nvme/nvme.sh@97 -- # uname 00:10:01.703 14:10:59 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:01.703 14:10:59 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:01.703 14:10:59 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:01.703 14:10:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:01.703 14:10:59 -- common/autotest_common.sh@10 -- # set +x 00:10:01.703 ************************************ 00:10:01.703 START TEST nvme_multi_aen 00:10:01.703 ************************************ 00:10:01.704 14:11:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:01.704 [2024-11-19 14:11:00.042770] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
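Each of the doorbell passes above is one iteration of the same per-controller loop: nvme.sh@71 builds the bdf list with gen_nvme.sh, then nvme.sh@72-73 runs doorbell_aers against each controller under a 10-second timeout. Condensed into a standalone sketch, with paths as traced in this log and sudo assumed:

    # Approximation of the nvme_doorbell_aers loop traced above.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        sudo timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done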
00:10:01.704 [2024-11-19 14:11:00.043167] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.704 [2024-11-19 14:11:00.176734] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:01.704 [2024-11-19 14:11:00.176896] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63908) is not found. Dropping the request. 00:10:01.704 [2024-11-19 14:11:00.177221] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63908) is not found. Dropping the request. 00:10:01.704 [2024-11-19 14:11:00.177310] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63908) is not found. Dropping the request. 00:10:01.704 [2024-11-19 14:11:00.178840] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:01.704 [2024-11-19 14:11:00.178946] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63908) is not found. Dropping the request. 00:10:01.704 [2024-11-19 14:11:00.179015] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63908) is not found. Dropping the request. 00:10:01.704 [2024-11-19 14:11:00.179056] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63908) is not found. Dropping the request. 00:10:01.704 [2024-11-19 14:11:00.179939] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:01.704 [2024-11-19 14:11:00.180012] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63908) is not found. Dropping the request. 00:10:01.704 [2024-11-19 14:11:00.180072] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63908) is not found. Dropping the request. 00:10:01.704 [2024-11-19 14:11:00.180113] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63908) is not found. Dropping the request. 00:10:01.704 [2024-11-19 14:11:00.181222] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:01.704 [2024-11-19 14:11:00.181294] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63908) is not found. Dropping the request. 00:10:01.704 [2024-11-19 14:11:00.181351] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63908) is not found. Dropping the request. 00:10:01.704 [2024-11-19 14:11:00.181394] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63908) is not found. Dropping the request. 00:10:01.704 [2024-11-19 14:11:00.189716] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:01.704 [2024-11-19 14:11:00.189980] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 Child process pid: 64430 00:10:01.704 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.962 [Child] Asynchronous Event Request test 00:10:01.962 [Child] Attached to 0000:00:06.0 00:10:01.962 [Child] Attached to 0000:00:07.0 00:10:01.962 [Child] Attached to 0000:00:09.0 00:10:01.962 [Child] Attached to 0000:00:08.0 00:10:01.962 [Child] Registering asynchronous event callbacks... 00:10:01.962 [Child] Getting orig temperature thresholds of all controllers 00:10:01.962 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:01.962 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:01.962 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:01.962 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:01.962 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:01.962 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:01.962 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:01.962 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:01.962 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:01.962 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.962 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.962 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.962 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.962 [Child] Cleaning up... 00:10:01.962 Asynchronous Event Request test 00:10:01.962 Attached to 0000:00:06.0 00:10:01.962 Attached to 0000:00:07.0 00:10:01.962 Attached to 0000:00:09.0 00:10:01.962 Attached to 0000:00:08.0 00:10:01.962 Reset controller to setup AER completions for this process 00:10:01.962 Registering asynchronous event callbacks... 
00:10:01.962 Getting orig temperature thresholds of all controllers 00:10:01.962 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:01.962 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:01.962 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:01.962 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:01.962 Setting all controllers temperature threshold low to trigger AER 00:10:01.962 Waiting for all controllers temperature threshold to be set lower 00:10:01.962 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:01.962 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:10:01.962 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:01.962 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:10:01.962 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:01.962 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:10:01.962 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:01.962 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:10:01.962 Waiting for all controllers to trigger AER and reset threshold 00:10:01.962 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.962 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.962 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.962 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.962 Cleaning up... 00:10:01.962 00:10:01.962 real 0m0.406s 00:10:01.962 user 0m0.104s 00:10:01.962 sys 0m0.199s 00:10:01.962 ************************************ 00:10:01.962 END TEST nvme_multi_aen 00:10:01.962 ************************************ 00:10:01.962 14:11:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:01.962 14:11:00 -- common/autotest_common.sh@10 -- # set +x 00:10:01.962 14:11:00 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:01.962 14:11:00 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:01.962 14:11:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:01.962 14:11:00 -- common/autotest_common.sh@10 -- # set +x 00:10:01.962 ************************************ 00:10:01.962 START TEST nvme_startup 00:10:01.962 ************************************ 00:10:01.962 14:11:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:02.306 Initializing NVMe Controllers 00:10:02.306 Attached to 0000:00:06.0 00:10:02.306 Attached to 0000:00:07.0 00:10:02.306 Attached to 0000:00:09.0 00:10:02.306 Attached to 0000:00:08.0 00:10:02.306 Initialization complete. 00:10:02.306 Time used:135600.469 (us). 
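nvme_startup above is purely a timing probe: the binary attaches to all four controllers, prints the elapsed initialization time, and exits, so the "Time used" line is its entire result. Re-running it standalone looks like the sketch below; -t 1000000 is forwarded exactly as nvme.sh@99 passed it, and its unit is not restated anywhere in this log:

    # Measure controller bring-up time; the -t argument is copied
    # verbatim from the harness invocation above.
    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/nvme/startup/startup -t 1000000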
00:10:02.306 00:10:02.306 real 0m0.192s 00:10:02.306 user 0m0.050s 00:10:02.306 sys 0m0.097s 00:10:02.306 ************************************ 00:10:02.306 END TEST nvme_startup 00:10:02.306 ************************************ 00:10:02.306 14:11:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:02.306 14:11:00 -- common/autotest_common.sh@10 -- # set +x 00:10:02.306 14:11:00 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:02.306 14:11:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:02.306 14:11:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:02.306 14:11:00 -- common/autotest_common.sh@10 -- # set +x 00:10:02.306 ************************************ 00:10:02.306 START TEST nvme_multi_secondary 00:10:02.306 ************************************ 00:10:02.306 14:11:00 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:10:02.306 14:11:00 -- nvme/nvme.sh@52 -- # pid0=64486 00:10:02.306 14:11:00 -- nvme/nvme.sh@54 -- # pid1=64487 00:10:02.306 14:11:00 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:02.306 14:11:00 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:02.306 14:11:00 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:05.623 Initializing NVMe Controllers 00:10:05.623 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:05.623 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:05.623 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:05.623 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:05.623 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:05.623 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:05.623 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:05.623 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:05.623 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:05.623 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:05.623 Initialization complete. Launching workers. 
00:10:05.623 ======================================================== 00:10:05.623 Latency(us) 00:10:05.623 Device Information : IOPS MiB/s Average min max 00:10:05.623 PCIE (0000:00:06.0) NSID 1 from core 2: 2709.51 10.58 5903.85 995.10 14010.91 00:10:05.623 PCIE (0000:00:07.0) NSID 1 from core 2: 2709.51 10.58 5904.99 1014.83 13854.98 00:10:05.623 PCIE (0000:00:09.0) NSID 1 from core 2: 2709.51 10.58 5904.73 1013.09 14594.98 00:10:05.623 PCIE (0000:00:08.0) NSID 1 from core 2: 2709.51 10.58 5905.25 1063.40 13930.86 00:10:05.623 PCIE (0000:00:08.0) NSID 2 from core 2: 2709.51 10.58 5904.99 1024.81 14301.56 00:10:05.623 PCIE (0000:00:08.0) NSID 3 from core 2: 2709.51 10.58 5905.44 1024.88 14120.99 00:10:05.623 ======================================================== 00:10:05.623 Total : 16257.07 63.50 5904.87 995.10 14594.98 00:10:05.623 00:10:05.623 Initializing NVMe Controllers 00:10:05.623 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:05.623 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:05.623 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:05.623 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:05.623 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:05.623 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:05.623 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:05.623 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:05.623 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:05.623 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:05.623 Initialization complete. Launching workers. 00:10:05.623 ======================================================== 00:10:05.623 Latency(us) 00:10:05.623 Device Information : IOPS MiB/s Average min max 00:10:05.623 PCIE (0000:00:06.0) NSID 1 from core 1: 7065.88 27.60 2266.49 852.96 6787.83 00:10:05.623 PCIE (0000:00:07.0) NSID 1 from core 1: 7065.88 27.60 2267.69 870.46 6711.30 00:10:05.623 PCIE (0000:00:09.0) NSID 1 from core 1: 7065.88 27.60 2267.65 854.62 6502.87 00:10:05.623 PCIE (0000:00:08.0) NSID 1 from core 1: 7065.88 27.60 2267.63 866.34 6911.98 00:10:05.623 PCIE (0000:00:08.0) NSID 2 from core 1: 7065.88 27.60 2267.61 855.73 7055.85 00:10:05.623 PCIE (0000:00:08.0) NSID 3 from core 1: 7065.88 27.60 2267.57 862.92 6479.17 00:10:05.623 ======================================================== 00:10:05.623 Total : 42395.25 165.61 2267.44 852.96 7055.85 00:10:05.623 00:10:05.623 14:11:04 -- nvme/nvme.sh@56 -- # wait 64486 00:10:08.151 Initializing NVMe Controllers 00:10:08.151 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:08.151 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:08.151 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:08.151 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:08.151 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:08.151 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:08.151 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:08.151 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:08.152 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:08.152 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:08.152 Initialization complete. Launching workers. 
00:10:08.152 ======================================================== 00:10:08.152 Latency(us) 00:10:08.152 Device Information : IOPS MiB/s Average min max 00:10:08.152 PCIE (0000:00:06.0) NSID 1 from core 0: 10261.23 40.08 1558.11 742.80 8764.10 00:10:08.152 PCIE (0000:00:07.0) NSID 1 from core 0: 10261.23 40.08 1558.89 767.82 8284.07 00:10:08.152 PCIE (0000:00:09.0) NSID 1 from core 0: 10261.23 40.08 1558.87 685.03 6895.45 00:10:08.152 PCIE (0000:00:08.0) NSID 1 from core 0: 10261.23 40.08 1558.85 672.58 9262.91 00:10:08.152 PCIE (0000:00:08.0) NSID 2 from core 0: 10261.23 40.08 1558.83 649.32 9152.20 00:10:08.152 PCIE (0000:00:08.0) NSID 3 from core 0: 10261.23 40.08 1558.82 622.87 9016.64 00:10:08.152 ======================================================== 00:10:08.152 Total : 61567.36 240.50 1558.73 622.87 9262.91 00:10:08.152 00:10:08.152 14:11:06 -- nvme/nvme.sh@57 -- # wait 64487 00:10:08.152 14:11:06 -- nvme/nvme.sh@61 -- # pid0=64556 00:10:08.152 14:11:06 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:08.152 14:11:06 -- nvme/nvme.sh@63 -- # pid1=64557 00:10:08.152 14:11:06 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:08.152 14:11:06 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:11.435 Initializing NVMe Controllers 00:10:11.435 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:11.435 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:11.435 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:11.435 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:11.435 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:11.435 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:11.435 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:11.435 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:11.435 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:11.435 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:11.435 Initialization complete. Launching workers. 
00:10:11.435 ======================================================== 00:10:11.435 Latency(us) 00:10:11.435 Device Information : IOPS MiB/s Average min max 00:10:11.435 PCIE (0000:00:06.0) NSID 1 from core 0: 7010.30 27.38 2281.04 793.38 7627.80 00:10:11.435 PCIE (0000:00:07.0) NSID 1 from core 0: 7010.30 27.38 2281.94 795.09 7139.09 00:10:11.435 PCIE (0000:00:09.0) NSID 1 from core 0: 7010.30 27.38 2281.93 795.34 7378.63 00:10:11.435 PCIE (0000:00:08.0) NSID 1 from core 0: 7010.30 27.38 2281.90 806.25 7086.32 00:10:11.435 PCIE (0000:00:08.0) NSID 2 from core 0: 7010.30 27.38 2281.86 811.76 7656.24 00:10:11.435 PCIE (0000:00:08.0) NSID 3 from core 0: 7010.30 27.38 2281.97 789.57 7603.96 00:10:11.435 ======================================================== 00:10:11.435 Total : 42061.77 164.30 2281.77 789.57 7656.24 00:10:11.435 00:10:11.435 Initializing NVMe Controllers 00:10:11.435 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:11.435 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:11.435 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:11.435 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:11.435 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:11.435 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:11.435 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:11.435 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:11.435 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:11.435 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:11.435 Initialization complete. Launching workers. 00:10:11.435 ======================================================== 00:10:11.435 Latency(us) 00:10:11.435 Device Information : IOPS MiB/s Average min max 00:10:11.435 PCIE (0000:00:06.0) NSID 1 from core 1: 7532.92 29.43 2122.69 775.33 6942.18 00:10:11.435 PCIE (0000:00:07.0) NSID 1 from core 1: 7532.92 29.43 2123.66 803.65 6922.64 00:10:11.435 PCIE (0000:00:09.0) NSID 1 from core 1: 7532.92 29.43 2123.62 801.54 6226.71 00:10:11.435 PCIE (0000:00:08.0) NSID 1 from core 1: 7532.92 29.43 2123.59 799.06 6391.89 00:10:11.436 PCIE (0000:00:08.0) NSID 2 from core 1: 7532.92 29.43 2123.54 787.39 6350.89 00:10:11.436 PCIE (0000:00:08.0) NSID 3 from core 1: 7532.92 29.43 2123.50 792.17 6663.45 00:10:11.436 ======================================================== 00:10:11.436 Total : 45197.51 176.55 2123.43 775.33 6942.18 00:10:11.436 00:10:13.338 Initializing NVMe Controllers 00:10:13.338 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:13.338 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:13.338 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:13.338 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:13.338 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:13.338 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:13.338 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:13.338 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:13.338 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:13.338 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:13.338 Initialization complete. Launching workers. 
00:10:13.338 ======================================================== 00:10:13.338 Latency(us) 00:10:13.338 Device Information : IOPS MiB/s Average min max 00:10:13.338 PCIE (0000:00:06.0) NSID 1 from core 2: 4362.57 17.04 3665.33 764.13 13279.94 00:10:13.338 PCIE (0000:00:07.0) NSID 1 from core 2: 4362.57 17.04 3666.97 759.06 16651.85 00:10:13.338 PCIE (0000:00:09.0) NSID 1 from core 2: 4362.57 17.04 3666.73 737.65 16618.43 00:10:13.338 PCIE (0000:00:08.0) NSID 1 from core 2: 4362.57 17.04 3666.68 684.94 13021.41 00:10:13.338 PCIE (0000:00:08.0) NSID 2 from core 2: 4362.57 17.04 3666.82 640.53 12707.23 00:10:13.338 PCIE (0000:00:08.0) NSID 3 from core 2: 4362.57 17.04 3666.77 605.13 13725.49 00:10:13.338 ======================================================== 00:10:13.338 Total : 26175.41 102.25 3666.55 605.13 16651.85 00:10:13.338 00:10:13.338 ************************************ 00:10:13.338 END TEST nvme_multi_secondary 00:10:13.338 ************************************ 00:10:13.338 14:11:11 -- nvme/nvme.sh@65 -- # wait 64556 00:10:13.338 14:11:11 -- nvme/nvme.sh@66 -- # wait 64557 00:10:13.338 00:10:13.338 real 0m10.792s 00:10:13.338 user 0m18.645s 00:10:13.338 sys 0m0.668s 00:10:13.338 14:11:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:13.338 14:11:11 -- common/autotest_common.sh@10 -- # set +x 00:10:13.338 14:11:11 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:13.338 14:11:11 -- nvme/nvme.sh@102 -- # kill_stub 00:10:13.338 14:11:11 -- common/autotest_common.sh@1075 -- # [[ -e /proc/63503 ]] 00:10:13.338 14:11:11 -- common/autotest_common.sh@1076 -- # kill 63503 00:10:13.338 14:11:11 -- common/autotest_common.sh@1077 -- # wait 63503 00:10:13.910 [2024-11-19 14:11:12.193100] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64429) is not found. Dropping the request. 00:10:13.910 [2024-11-19 14:11:12.193149] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64429) is not found. Dropping the request. 00:10:13.910 [2024-11-19 14:11:12.193160] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64429) is not found. Dropping the request. 00:10:13.911 [2024-11-19 14:11:12.193170] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64429) is not found. Dropping the request. 00:10:14.170 [2024-11-19 14:11:12.713422] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64429) is not found. Dropping the request. 00:10:14.170 [2024-11-19 14:11:12.713468] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64429) is not found. Dropping the request. 00:10:14.170 [2024-11-19 14:11:12.713479] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64429) is not found. Dropping the request. 00:10:14.170 [2024-11-19 14:11:12.713490] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64429) is not found. Dropping the request. 00:10:14.741 [2024-11-19 14:11:13.232296] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64429) is not found. Dropping the request. 00:10:14.741 [2024-11-19 14:11:13.232344] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64429) is not found. 
Dropping the request. 00:10:14.741 [2024-11-19 14:11:13.232356] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64429) is not found. Dropping the request. 00:10:14.741 [2024-11-19 14:11:13.232366] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64429) is not found. Dropping the request. 00:10:16.657 [2024-11-19 14:11:14.729774] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64429) is not found. Dropping the request. 00:10:16.657 [2024-11-19 14:11:14.729828] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64429) is not found. Dropping the request. 00:10:16.657 [2024-11-19 14:11:14.729840] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64429) is not found. Dropping the request. 00:10:16.657 [2024-11-19 14:11:14.729853] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64429) is not found. Dropping the request. 00:10:16.657 14:11:14 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:10:16.657 14:11:14 -- common/autotest_common.sh@1083 -- # echo 2 00:10:16.657 14:11:14 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:16.657 14:11:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:16.657 14:11:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:16.657 14:11:14 -- common/autotest_common.sh@10 -- # set +x 00:10:16.657 ************************************ 00:10:16.657 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:16.657 ************************************ 00:10:16.657 14:11:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:16.657 * Looking for test storage... 00:10:16.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:16.657 14:11:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:16.657 14:11:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:16.657 14:11:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:16.657 14:11:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:16.657 14:11:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:16.657 14:11:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:16.657 14:11:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:16.657 14:11:15 -- scripts/common.sh@335 -- # IFS=.-: 00:10:16.657 14:11:15 -- scripts/common.sh@335 -- # read -ra ver1 00:10:16.657 14:11:15 -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.657 14:11:15 -- scripts/common.sh@336 -- # read -ra ver2 00:10:16.657 14:11:15 -- scripts/common.sh@337 -- # local 'op=<' 00:10:16.657 14:11:15 -- scripts/common.sh@339 -- # ver1_l=2 00:10:16.657 14:11:15 -- scripts/common.sh@340 -- # ver2_l=1 00:10:16.657 14:11:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:16.657 14:11:15 -- scripts/common.sh@343 -- # case "$op" in 00:10:16.657 14:11:15 -- scripts/common.sh@344 -- # : 1 00:10:16.657 14:11:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:16.657 14:11:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:16.657 14:11:15 -- scripts/common.sh@364 -- # decimal 1 00:10:16.657 14:11:15 -- scripts/common.sh@352 -- # local d=1 00:10:16.657 14:11:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.657 14:11:15 -- scripts/common.sh@354 -- # echo 1 00:10:16.657 14:11:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:16.657 14:11:15 -- scripts/common.sh@365 -- # decimal 2 00:10:16.657 14:11:15 -- scripts/common.sh@352 -- # local d=2 00:10:16.657 14:11:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.657 14:11:15 -- scripts/common.sh@354 -- # echo 2 00:10:16.657 14:11:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:16.657 14:11:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:16.657 14:11:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:16.657 14:11:15 -- scripts/common.sh@367 -- # return 0 00:10:16.657 14:11:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.657 14:11:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:16.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.657 --rc genhtml_branch_coverage=1 00:10:16.657 --rc genhtml_function_coverage=1 00:10:16.657 --rc genhtml_legend=1 00:10:16.657 --rc geninfo_all_blocks=1 00:10:16.657 --rc geninfo_unexecuted_blocks=1 00:10:16.657 00:10:16.657 ' 00:10:16.657 14:11:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:16.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.657 --rc genhtml_branch_coverage=1 00:10:16.657 --rc genhtml_function_coverage=1 00:10:16.657 --rc genhtml_legend=1 00:10:16.657 --rc geninfo_all_blocks=1 00:10:16.657 --rc geninfo_unexecuted_blocks=1 00:10:16.657 00:10:16.657 ' 00:10:16.657 14:11:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:16.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.657 --rc genhtml_branch_coverage=1 00:10:16.657 --rc genhtml_function_coverage=1 00:10:16.657 --rc genhtml_legend=1 00:10:16.657 --rc geninfo_all_blocks=1 00:10:16.657 --rc geninfo_unexecuted_blocks=1 00:10:16.657 00:10:16.657 ' 00:10:16.657 14:11:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:16.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.657 --rc genhtml_branch_coverage=1 00:10:16.657 --rc genhtml_function_coverage=1 00:10:16.657 --rc genhtml_legend=1 00:10:16.657 --rc geninfo_all_blocks=1 00:10:16.657 --rc geninfo_unexecuted_blocks=1 00:10:16.657 00:10:16.657 ' 00:10:16.657 14:11:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:16.657 14:11:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:16.657 14:11:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:16.657 14:11:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:16.657 14:11:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:16.657 14:11:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:16.657 14:11:15 -- common/autotest_common.sh@1519 -- # bdfs=() 00:10:16.657 14:11:15 -- common/autotest_common.sh@1519 -- # local bdfs 00:10:16.657 14:11:15 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:16.657 14:11:15 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:10:16.657 14:11:15 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:16.657 14:11:15 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:16.657 14:11:15 -- common/autotest_common.sh@1509 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:16.657 14:11:15 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:16.657 14:11:15 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:16.657 14:11:15 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:16.657 14:11:15 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:16.657 14:11:15 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:10:16.657 14:11:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:10:16.657 14:11:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:10:16.657 14:11:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64748 00:10:16.657 14:11:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:16.657 14:11:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64748 00:10:16.657 14:11:15 -- common/autotest_common.sh@829 -- # '[' -z 64748 ']' 00:10:16.657 14:11:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.657 14:11:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:16.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.657 14:11:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.657 14:11:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:16.657 14:11:15 -- common/autotest_common.sh@10 -- # set +x 00:10:16.657 14:11:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:16.657 [2024-11-19 14:11:15.183553] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
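The xtrace above is autotest_common.sh resolving the first NVMe bdf so the stuck-admin-command test can attach to it (0000:00:06.0 in this run). Stripped of tracing, the logic reduces to "enumerate with gen_nvme.sh, take the head of the list"; a condensed sketch under the same paths:

    # Condensed get_first_nvme_bdf, as traced at autotest_common.sh@1519-1522.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe bdfs found' >&2; exit 1; }
    echo "${bdfs[0]}"    # -> 0000:00:06.0 in this run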
00:10:16.657 [2024-11-19 14:11:15.183660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64748 ] 00:10:16.919 [2024-11-19 14:11:15.341221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:17.178 [2024-11-19 14:11:15.517226] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:17.178 [2024-11-19 14:11:15.517517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.178 [2024-11-19 14:11:15.517774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.178 [2024-11-19 14:11:15.518043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.178 [2024-11-19 14:11:15.518064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.113 14:11:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:18.113 14:11:16 -- common/autotest_common.sh@862 -- # return 0 00:10:18.113 14:11:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:10:18.113 14:11:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.113 14:11:16 -- common/autotest_common.sh@10 -- # set +x 00:10:18.372 nvme0n1 00:10:18.372 14:11:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.372 14:11:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:18.372 14:11:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_rnOSX.txt 00:10:18.372 14:11:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:18.372 14:11:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.372 14:11:16 -- common/autotest_common.sh@10 -- # set +x 00:10:18.372 true 00:10:18.372 14:11:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.372 14:11:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:18.372 14:11:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732025476 00:10:18.372 14:11:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64778 00:10:18.372 14:11:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:18.372 14:11:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:18.372 14:11:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:20.272 14:11:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.272 14:11:18 -- common/autotest_common.sh@10 -- # set +x 00:10:20.272 [2024-11-19 14:11:18.751971] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:20.272 [2024-11-19 14:11:18.752392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:20.272 [2024-11-19 14:11:18.752427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:20.272 [2024-11-19 14:11:18.752438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:20.272 [2024-11-19 14:11:18.754045] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:20.272 14:11:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.272 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64778 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64778 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64778 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:20.272 14:11:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.272 14:11:18 -- common/autotest_common.sh@10 -- # set +x 00:10:20.272 14:11:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_rnOSX.txt 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:20.272 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:20.530 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:20.530 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:20.530 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:20.530 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:20.530 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:20.530 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:20.530 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_rnOSX.txt 00:10:20.530 14:11:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64748 00:10:20.530 14:11:18 -- common/autotest_common.sh@936 -- # '[' -z 64748 ']' 00:10:20.530 14:11:18 -- common/autotest_common.sh@940 -- # kill -0 64748 00:10:20.530 14:11:18 -- common/autotest_common.sh@941 -- # uname 00:10:20.530 14:11:18 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:20.530 14:11:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64748 00:10:20.530 killing process with pid 64748 00:10:20.530 14:11:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:20.530 14:11:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:20.530 14:11:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64748' 00:10:20.530 14:11:18 -- common/autotest_common.sh@955 -- # kill 64748 00:10:20.530 14:11:18 -- common/autotest_common.sh@960 -- # wait 64748 00:10:21.911 14:11:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:21.911 14:11:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:21.911 ************************************ 00:10:21.911 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:21.911 ************************************ 00:10:21.911 00:10:21.911 real 0m5.327s 00:10:21.911 user 0m18.859s 00:10:21.911 sys 0m0.528s 00:10:21.911 14:11:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:21.911 14:11:20 -- common/autotest_common.sh@10 -- # set +x 00:10:21.911 14:11:20 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:21.911 14:11:20 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:21.911 14:11:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:21.911 14:11:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:21.911 14:11:20 -- common/autotest_common.sh@10 -- # set +x 00:10:21.911 ************************************ 00:10:21.911 START TEST nvme_fio 00:10:21.911 ************************************ 00:10:21.911 14:11:20 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:10:21.911 14:11:20 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:21.911 14:11:20 -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:21.911 14:11:20 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:21.911 14:11:20 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:21.911 14:11:20 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:21.911 14:11:20 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:21.911 14:11:20 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:21.911 14:11:20 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:21.911 14:11:20 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:21.911 14:11:20 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:21.911 14:11:20 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:10:21.911 14:11:20 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:21.911 14:11:20 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:21.911 14:11:20 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:21.911 14:11:20 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:22.170 14:11:20 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:22.170 14:11:20 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:22.497 14:11:20 -- nvme/nvme.sh@41 -- # bs=4096 00:10:22.497 14:11:20 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:22.497 14:11:20 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:22.497 14:11:20 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:22.497 14:11:20 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:22.497 14:11:20 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:22.497 14:11:20 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:22.497 14:11:20 -- common/autotest_common.sh@1330 -- # shift 00:10:22.497 14:11:20 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:22.497 14:11:20 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:22.497 14:11:20 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:22.497 14:11:20 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:22.497 14:11:20 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:22.497 14:11:20 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:22.497 14:11:20 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:22.497 14:11:20 -- common/autotest_common.sh@1336 -- # break 00:10:22.497 14:11:20 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:22.497 14:11:20 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:22.497 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:22.497 fio-3.35 00:10:22.497 Starting 1 thread 00:10:27.799 00:10:27.799 test: (groupid=0, jobs=1): err= 0: pid=64924: Tue Nov 19 14:11:26 2024 00:10:27.799 read: IOPS=21.0k, BW=82.0MiB/s (86.0MB/s)(164MiB/2001msec) 00:10:27.799 slat (nsec): min=3861, max=84743, avg=5689.12, stdev=2289.60 00:10:27.799 clat (usec): min=262, max=11305, avg=3044.84, stdev=942.14 00:10:27.799 lat (usec): min=267, max=11390, avg=3050.53, stdev=943.36 00:10:27.799 clat percentiles (usec): 00:10:27.799 | 1.00th=[ 2245], 5.00th=[ 2409], 10.00th=[ 2442], 20.00th=[ 2507], 00:10:27.799 | 30.00th=[ 2573], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2769], 00:10:27.799 | 70.00th=[ 2966], 80.00th=[ 3261], 90.00th=[ 4228], 95.00th=[ 5276], 00:10:27.799 | 99.00th=[ 6521], 99.50th=[ 6915], 99.90th=[ 8225], 99.95th=[ 8848], 00:10:27.799 | 99.99th=[11207] 00:10:27.799 bw ( KiB/s): min=80392, max=84176, per=97.88%, avg=82181.33, stdev=1900.34, samples=3 00:10:27.799 iops : min=20098, max=21044, avg=20545.33, stdev=475.08, samples=3 00:10:27.799 write: IOPS=20.9k, BW=81.5MiB/s (85.5MB/s)(163MiB/2001msec); 0 zone resets 00:10:27.799 slat (nsec): min=3984, max=77640, avg=6046.97, stdev=2401.60 00:10:27.799 clat (usec): min=200, max=11225, avg=3043.04, stdev=929.11 00:10:27.799 lat (usec): min=205, max=11241, avg=3049.08, stdev=930.31 00:10:27.799 clat percentiles (usec): 00:10:27.799 | 1.00th=[ 2245], 5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2507], 00:10:27.799 | 30.00th=[ 2573], 40.00th=[ 2638], 50.00th=[ 2704], 60.00th=[ 2769], 00:10:27.799 | 70.00th=[ 2966], 80.00th=[ 3261], 90.00th=[ 4228], 95.00th=[ 5211], 00:10:27.799 | 99.00th=[ 6587], 99.50th=[ 
6915], 99.90th=[ 7832], 99.95th=[ 8717], 00:10:27.799 | 99.99th=[11076] 00:10:27.799 bw ( KiB/s): min=80928, max=84120, per=98.49%, avg=82248.00, stdev=1666.06, samples=3 00:10:27.799 iops : min=20232, max=21030, avg=20562.00, stdev=416.51, samples=3 00:10:27.799 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:10:27.799 lat (msec) : 2=0.17%, 4=88.52%, 10=11.23%, 20=0.04% 00:10:27.799 cpu : usr=99.00%, sys=0.25%, ctx=3, majf=0, minf=609 00:10:27.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:27.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:27.799 issued rwts: total=42001,41774,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.799 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:27.799 00:10:27.799 Run status group 0 (all jobs): 00:10:27.799 READ: bw=82.0MiB/s (86.0MB/s), 82.0MiB/s-82.0MiB/s (86.0MB/s-86.0MB/s), io=164MiB (172MB), run=2001-2001msec 00:10:27.799 WRITE: bw=81.5MiB/s (85.5MB/s), 81.5MiB/s-81.5MiB/s (85.5MB/s-85.5MB/s), io=163MiB (171MB), run=2001-2001msec 00:10:27.799 ----------------------------------------------------- 00:10:27.799 Suppressions used: 00:10:27.799 count bytes template 00:10:27.799 1 32 /usr/src/fio/parse.c 00:10:27.799 1 8 libtcmalloc_minimal.so 00:10:27.799 ----------------------------------------------------- 00:10:27.799 00:10:27.799 14:11:26 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:27.799 14:11:26 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:27.799 14:11:26 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:27.799 14:11:26 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:28.060 14:11:26 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:28.060 14:11:26 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:28.321 14:11:26 -- nvme/nvme.sh@41 -- # bs=4096 00:10:28.321 14:11:26 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:28.321 14:11:26 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:28.321 14:11:26 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:28.321 14:11:26 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:28.321 14:11:26 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:28.321 14:11:26 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:28.321 14:11:26 -- common/autotest_common.sh@1330 -- # shift 00:10:28.321 14:11:26 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:28.321 14:11:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:28.321 14:11:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:28.321 14:11:26 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:28.321 14:11:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:28.321 14:11:26 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:28.321 14:11:26 -- common/autotest_common.sh@1335 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:10:28.321 14:11:26 -- common/autotest_common.sh@1336 -- # break 00:10:28.322 14:11:26 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:28.322 14:11:26 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:28.583 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:28.583 fio-3.35 00:10:28.583 Starting 1 thread 00:10:33.867 00:10:33.867 test: (groupid=0, jobs=1): err= 0: pid=65007: Tue Nov 19 14:11:31 2024 00:10:33.867 read: IOPS=17.9k, BW=70.0MiB/s (73.4MB/s)(140MiB/2001msec) 00:10:33.867 slat (nsec): min=3833, max=70419, avg=6677.90, stdev=2964.45 00:10:33.867 clat (usec): min=683, max=12329, avg=3538.57, stdev=1205.86 00:10:33.867 lat (usec): min=689, max=12395, avg=3545.25, stdev=1207.54 00:10:33.867 clat percentiles (usec): 00:10:33.867 | 1.00th=[ 2212], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2671], 00:10:33.867 | 30.00th=[ 2835], 40.00th=[ 2999], 50.00th=[ 3228], 60.00th=[ 3392], 00:10:33.867 | 70.00th=[ 3621], 80.00th=[ 4015], 90.00th=[ 5276], 95.00th=[ 6259], 00:10:33.867 | 99.00th=[ 7570], 99.50th=[ 8094], 99.90th=[10552], 99.95th=[11076], 00:10:33.867 | 99.99th=[11600] 00:10:33.867 bw ( KiB/s): min=59360, max=80080, per=94.17%, avg=67533.33, stdev=11030.60, samples=3 00:10:33.867 iops : min=14840, max=20020, avg=16883.33, stdev=2757.65, samples=3 00:10:33.867 write: IOPS=17.9k, BW=70.1MiB/s (73.5MB/s)(140MiB/2001msec); 0 zone resets 00:10:33.867 slat (usec): min=3, max=134, avg= 7.10, stdev= 3.12 00:10:33.867 clat (usec): min=694, max=11612, avg=3566.87, stdev=1207.62 00:10:33.867 lat (usec): min=700, max=11641, avg=3573.96, stdev=1209.34 00:10:33.867 clat percentiles (usec): 00:10:33.867 | 1.00th=[ 2212], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2704], 00:10:33.867 | 30.00th=[ 2835], 40.00th=[ 3032], 50.00th=[ 3228], 60.00th=[ 3458], 00:10:33.867 | 70.00th=[ 3654], 80.00th=[ 4080], 90.00th=[ 5342], 95.00th=[ 6325], 00:10:33.867 | 99.00th=[ 7570], 99.50th=[ 8225], 99.90th=[10683], 99.95th=[10814], 00:10:33.867 | 99.99th=[11469] 00:10:33.867 bw ( KiB/s): min=59696, max=80072, per=93.99%, avg=67440.00, stdev=11032.44, samples=3 00:10:33.867 iops : min=14924, max=20018, avg=16860.00, stdev=2758.11, samples=3 00:10:33.867 lat (usec) : 750=0.01%, 1000=0.02% 00:10:33.867 lat (msec) : 2=0.19%, 4=79.13%, 10=20.49%, 20=0.16% 00:10:33.867 cpu : usr=99.05%, sys=0.05%, ctx=5, majf=0, minf=608 00:10:33.867 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:33.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.867 issued rwts: total=35873,35896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.867 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.867 00:10:33.867 Run status group 0 (all jobs): 00:10:33.867 READ: bw=70.0MiB/s (73.4MB/s), 70.0MiB/s-70.0MiB/s (73.4MB/s-73.4MB/s), io=140MiB (147MB), run=2001-2001msec 00:10:33.867 WRITE: bw=70.1MiB/s (73.5MB/s), 70.1MiB/s-70.1MiB/s (73.5MB/s-73.5MB/s), io=140MiB (147MB), run=2001-2001msec 00:10:33.867 ----------------------------------------------------- 00:10:33.867 Suppressions used: 00:10:33.867 count bytes template 00:10:33.867 1 32 /usr/src/fio/parse.c 00:10:33.867 1 8 libtcmalloc_minimal.so 
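Recapping the reset test that finished above: it manufactures a "stuck" admin command by arming an error injection that holds the next GET FEATURES for 15 s, firing that command in the background, and resetting the controller two seconds in; the reset is expected to complete the held command manually with the injected status. A sketch of the traced RPC sequence (the real script goes through the rpc_cmd wrapper; $cmd stands for the base64 GET FEATURES payload shown in the trace, and the redirect into $tmp_file is inferred from the later jq read of that file):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
# hold the next admin opcode 0x0a (GET FEATURES) for 15 s, then fail it sct=0/sc=1
$rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
$rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd" > "$tmp_file" &
get_feat_pid=$!
sleep 2
$rpc bdev_nvme_reset_controller nvme0   # must complete the held command manually
wait "$get_feat_pid"                    # diff_time came out as 2 s above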
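The completion that send_cmd saved is then unpacked by base64_decode_bits: the RPC returns the 16-byte CQE as base64, and the helper pulls the status code (SC) and status code type (SCT) out of the status word. A condensed sketch, assuming the same offset/mask convention the trace uses (SC at bit 1 masked with 255, SCT at bit 9 masked with 3) and indexing the status word at bytes 14-15 of completion dword 3:

decode_bits() {   # $1 = base64-encoded completion, $2 = bit offset, $3 = mask
    local bytes status
    bytes=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
    status=$(( bytes[14] | (bytes[15] << 8) ))   # status word: top half of CQE dword 3
    printf '0x%x\n' $(( (status >> $2) & $3 ))
}
decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # -> 0x1 (SC: Invalid Command Opcode)
decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # -> 0x0 (SCT: generic command status)

Those two values are exactly what the injection armed (--sct 0 --sc 1), which is what the test's final comparison checks.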
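As for the fio_nvme wrapper traced through the nvme_fio runs: it exists because the SPDK fio plugin is built with ASAN, and since fio dlopen()s the plugin, the sanitizer runtime has to be preloaded ahead of it or symbol resolution fails. A sketch of the traced logic (paths as on this VM; the real helper also probes libclang_rt.asan for clang builds, and the traddr is written with dots because fio reserves ':' as a filename separator):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
# find which ASAN runtime the plugin was linked against
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096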
00:10:33.867 ----------------------------------------------------- 00:10:33.867 00:10:33.867 14:11:31 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:33.867 14:11:31 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:33.867 14:11:31 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:33.867 14:11:31 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:33.867 14:11:32 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:33.868 14:11:32 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:33.868 14:11:32 -- nvme/nvme.sh@41 -- # bs=4096 00:10:33.868 14:11:32 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:33.868 14:11:32 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:33.868 14:11:32 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:33.868 14:11:32 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:33.868 14:11:32 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:33.868 14:11:32 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:33.868 14:11:32 -- common/autotest_common.sh@1330 -- # shift 00:10:33.868 14:11:32 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:33.868 14:11:32 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:33.868 14:11:32 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:33.868 14:11:32 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:33.868 14:11:32 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:34.129 14:11:32 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:34.129 14:11:32 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:34.129 14:11:32 -- common/autotest_common.sh@1336 -- # break 00:10:34.129 14:11:32 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:34.129 14:11:32 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:34.129 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:34.129 fio-3.35 00:10:34.129 Starting 1 thread 00:10:39.429 00:10:39.429 test: (groupid=0, jobs=1): err= 0: pid=65088: Tue Nov 19 14:11:37 2024 00:10:39.429 read: IOPS=17.5k, BW=68.5MiB/s (71.8MB/s)(137MiB/2001msec) 00:10:39.429 slat (usec): min=4, max=123, avg= 5.66, stdev= 2.96 00:10:39.429 clat (usec): min=336, max=10942, avg=3619.69, stdev=1268.89 00:10:39.429 lat (usec): min=342, max=10984, avg=3625.35, stdev=1270.01 00:10:39.429 clat percentiles (usec): 00:10:39.429 | 1.00th=[ 2114], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2638], 00:10:39.429 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 3097], 60.00th=[ 3392], 00:10:39.429 | 70.00th=[ 3982], 80.00th=[ 4752], 90.00th=[ 5604], 95.00th=[ 6194], 00:10:39.429 | 99.00th=[ 7177], 99.50th=[ 7701], 99.90th=[ 9896], 99.95th=[10421], 00:10:39.429 | 99.99th=[10945] 
00:10:39.429 bw ( KiB/s): min=70648, max=73896, per=100.00%, avg=72141.33, stdev=1639.69, samples=3 00:10:39.429 iops : min=17662, max=18474, avg=18035.33, stdev=409.92, samples=3 00:10:39.429 write: IOPS=17.5k, BW=68.5MiB/s (71.9MB/s)(137MiB/2001msec); 0 zone resets 00:10:39.429 slat (nsec): min=4278, max=80228, avg=5852.58, stdev=3012.61 00:10:39.429 clat (usec): min=358, max=10888, avg=3656.94, stdev=1275.85 00:10:39.429 lat (usec): min=365, max=10897, avg=3662.80, stdev=1276.99 00:10:39.429 clat percentiles (usec): 00:10:39.429 | 1.00th=[ 2147], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2671], 00:10:39.429 | 30.00th=[ 2802], 40.00th=[ 2933], 50.00th=[ 3130], 60.00th=[ 3425], 00:10:39.429 | 70.00th=[ 4015], 80.00th=[ 4752], 90.00th=[ 5669], 95.00th=[ 6259], 00:10:39.429 | 99.00th=[ 7242], 99.50th=[ 7767], 99.90th=[10028], 99.95th=[10421], 00:10:39.429 | 99.99th=[10814] 00:10:39.429 bw ( KiB/s): min=70968, max=73504, per=100.00%, avg=72069.33, stdev=1300.45, samples=3 00:10:39.429 iops : min=17742, max=18376, avg=18017.33, stdev=325.11, samples=3 00:10:39.429 lat (usec) : 500=0.01%, 750=0.01% 00:10:39.429 lat (msec) : 2=0.29%, 4=69.77%, 10=29.82%, 20=0.10% 00:10:39.429 cpu : usr=98.90%, sys=0.00%, ctx=3, majf=0, minf=608 00:10:39.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:39.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:39.429 issued rwts: total=35071,35105,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.429 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:39.429 00:10:39.429 Run status group 0 (all jobs): 00:10:39.429 READ: bw=68.5MiB/s (71.8MB/s), 68.5MiB/s-68.5MiB/s (71.8MB/s-71.8MB/s), io=137MiB (144MB), run=2001-2001msec 00:10:39.429 WRITE: bw=68.5MiB/s (71.9MB/s), 68.5MiB/s-68.5MiB/s (71.9MB/s-71.9MB/s), io=137MiB (144MB), run=2001-2001msec 00:10:39.429 ----------------------------------------------------- 00:10:39.429 Suppressions used: 00:10:39.429 count bytes template 00:10:39.429 1 32 /usr/src/fio/parse.c 00:10:39.429 1 8 libtcmalloc_minimal.so 00:10:39.429 ----------------------------------------------------- 00:10:39.429 00:10:39.429 14:11:37 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:39.429 14:11:37 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:39.429 14:11:37 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:39.429 14:11:37 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:39.739 14:11:38 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:39.739 14:11:38 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:40.000 14:11:38 -- nvme/nvme.sh@41 -- # bs=4096 00:10:40.000 14:11:38 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:40.000 14:11:38 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:40.000 14:11:38 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:40.000 14:11:38 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:40.000 14:11:38 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:40.000 
14:11:38 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:40.000 14:11:38 -- common/autotest_common.sh@1330 -- # shift 00:10:40.000 14:11:38 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:40.000 14:11:38 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:40.000 14:11:38 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:40.000 14:11:38 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:40.000 14:11:38 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:40.000 14:11:38 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:40.000 14:11:38 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:40.000 14:11:38 -- common/autotest_common.sh@1336 -- # break 00:10:40.000 14:11:38 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:40.000 14:11:38 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:40.000 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:40.000 fio-3.35 00:10:40.000 Starting 1 thread 00:10:48.142 00:10:48.142 test: (groupid=0, jobs=1): err= 0: pid=65166: Tue Nov 19 14:11:45 2024 00:10:48.142 read: IOPS=15.4k, BW=60.2MiB/s (63.1MB/s)(120MiB/2001msec) 00:10:48.142 slat (usec): min=4, max=116, avg= 6.86, stdev= 3.44 00:10:48.142 clat (usec): min=1241, max=12277, avg=4114.93, stdev=1399.71 00:10:48.142 lat (usec): min=1248, max=12341, avg=4121.79, stdev=1401.05 00:10:48.142 clat percentiles (usec): 00:10:48.142 | 1.00th=[ 2245], 5.00th=[ 2540], 10.00th=[ 2737], 20.00th=[ 2966], 00:10:48.142 | 30.00th=[ 3163], 40.00th=[ 3392], 50.00th=[ 3654], 60.00th=[ 4047], 00:10:48.142 | 70.00th=[ 4621], 80.00th=[ 5276], 90.00th=[ 6194], 95.00th=[ 6915], 00:10:48.142 | 99.00th=[ 8160], 99.50th=[ 8586], 99.90th=[ 9634], 99.95th=[ 9896], 00:10:48.142 | 99.99th=[12125] 00:10:48.142 bw ( KiB/s): min=59880, max=72680, per=100.00%, avg=64621.33, stdev=7015.24, samples=3 00:10:48.142 iops : min=14970, max=18170, avg=16155.33, stdev=1753.81, samples=3 00:10:48.142 write: IOPS=15.4k, BW=60.2MiB/s (63.1MB/s)(120MiB/2001msec); 0 zone resets 00:10:48.142 slat (nsec): min=4293, max=95609, avg=7203.59, stdev=3499.91 00:10:48.142 clat (usec): min=839, max=12197, avg=4160.12, stdev=1408.73 00:10:48.142 lat (usec): min=863, max=12219, avg=4167.32, stdev=1410.06 00:10:48.142 clat percentiles (usec): 00:10:48.142 | 1.00th=[ 2278], 5.00th=[ 2573], 10.00th=[ 2769], 20.00th=[ 2999], 00:10:48.142 | 30.00th=[ 3195], 40.00th=[ 3425], 50.00th=[ 3687], 60.00th=[ 4146], 00:10:48.142 | 70.00th=[ 4686], 80.00th=[ 5342], 90.00th=[ 6259], 95.00th=[ 6980], 00:10:48.142 | 99.00th=[ 8225], 99.50th=[ 8586], 99.90th=[ 9634], 99.95th=[10159], 00:10:48.142 | 99.99th=[12125] 00:10:48.142 bw ( KiB/s): min=60224, max=72392, per=100.00%, avg=64357.33, stdev=6959.19, samples=3 00:10:48.142 iops : min=15056, max=18098, avg=16089.33, stdev=1739.80, samples=3 00:10:48.142 lat (usec) : 1000=0.01% 00:10:48.142 lat (msec) : 2=0.27%, 4=57.90%, 10=41.78%, 20=0.05% 00:10:48.142 cpu : usr=98.65%, sys=0.25%, ctx=2, majf=0, minf=607 00:10:48.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:48.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:10:48.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.142 issued rwts: total=30821,30845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.142 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.142 00:10:48.142 Run status group 0 (all jobs): 00:10:48.142 READ: bw=60.2MiB/s (63.1MB/s), 60.2MiB/s-60.2MiB/s (63.1MB/s-63.1MB/s), io=120MiB (126MB), run=2001-2001msec 00:10:48.142 WRITE: bw=60.2MiB/s (63.1MB/s), 60.2MiB/s-60.2MiB/s (63.1MB/s-63.1MB/s), io=120MiB (126MB), run=2001-2001msec 00:10:48.142 ----------------------------------------------------- 00:10:48.142 Suppressions used: 00:10:48.142 count bytes template 00:10:48.142 1 32 /usr/src/fio/parse.c 00:10:48.142 1 8 libtcmalloc_minimal.so 00:10:48.142 ----------------------------------------------------- 00:10:48.142 00:10:48.142 ************************************ 00:10:48.142 END TEST nvme_fio 00:10:48.142 ************************************ 00:10:48.142 14:11:45 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:48.142 14:11:45 -- nvme/nvme.sh@46 -- # true 00:10:48.142 00:10:48.142 real 0m25.357s 00:10:48.142 user 0m19.757s 00:10:48.142 sys 0m7.339s 00:10:48.142 14:11:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:48.142 14:11:45 -- common/autotest_common.sh@10 -- # set +x 00:10:48.142 ************************************ 00:10:48.142 END TEST nvme 00:10:48.142 ************************************ 00:10:48.142 00:10:48.142 real 1m38.764s 00:10:48.142 user 3m43.705s 00:10:48.142 sys 0m17.914s 00:10:48.142 14:11:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:48.142 14:11:45 -- common/autotest_common.sh@10 -- # set +x 00:10:48.142 14:11:45 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:10:48.143 14:11:45 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:48.143 14:11:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:48.143 14:11:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:48.143 14:11:45 -- common/autotest_common.sh@10 -- # set +x 00:10:48.143 ************************************ 00:10:48.143 START TEST nvme_scc 00:10:48.143 ************************************ 00:10:48.143 14:11:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:48.143 * Looking for test storage... 
00:10:48.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:48.143 14:11:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:48.143 14:11:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:48.143 14:11:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:48.143 14:11:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:48.143 14:11:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:48.143 14:11:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:48.143 14:11:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:48.143 14:11:45 -- scripts/common.sh@335 -- # IFS=.-: 00:10:48.143 14:11:45 -- scripts/common.sh@335 -- # read -ra ver1 00:10:48.143 14:11:45 -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.143 14:11:45 -- scripts/common.sh@336 -- # read -ra ver2 00:10:48.143 14:11:45 -- scripts/common.sh@337 -- # local 'op=<' 00:10:48.143 14:11:45 -- scripts/common.sh@339 -- # ver1_l=2 00:10:48.143 14:11:45 -- scripts/common.sh@340 -- # ver2_l=1 00:10:48.143 14:11:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:48.143 14:11:45 -- scripts/common.sh@343 -- # case "$op" in 00:10:48.143 14:11:45 -- scripts/common.sh@344 -- # : 1 00:10:48.143 14:11:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:48.143 14:11:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:48.143 14:11:45 -- scripts/common.sh@364 -- # decimal 1 00:10:48.143 14:11:45 -- scripts/common.sh@352 -- # local d=1 00:10:48.143 14:11:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.143 14:11:45 -- scripts/common.sh@354 -- # echo 1 00:10:48.143 14:11:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:48.143 14:11:45 -- scripts/common.sh@365 -- # decimal 2 00:10:48.143 14:11:45 -- scripts/common.sh@352 -- # local d=2 00:10:48.143 14:11:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.143 14:11:45 -- scripts/common.sh@354 -- # echo 2 00:10:48.143 14:11:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:48.143 14:11:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:48.143 14:11:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:48.143 14:11:45 -- scripts/common.sh@367 -- # return 0 00:10:48.143 14:11:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.143 14:11:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:48.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.143 --rc genhtml_branch_coverage=1 00:10:48.143 --rc genhtml_function_coverage=1 00:10:48.143 --rc genhtml_legend=1 00:10:48.143 --rc geninfo_all_blocks=1 00:10:48.143 --rc geninfo_unexecuted_blocks=1 00:10:48.143 00:10:48.143 ' 00:10:48.143 14:11:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:48.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.143 --rc genhtml_branch_coverage=1 00:10:48.143 --rc genhtml_function_coverage=1 00:10:48.143 --rc genhtml_legend=1 00:10:48.143 --rc geninfo_all_blocks=1 00:10:48.143 --rc geninfo_unexecuted_blocks=1 00:10:48.143 00:10:48.143 ' 00:10:48.143 14:11:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:48.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.143 --rc genhtml_branch_coverage=1 00:10:48.143 --rc genhtml_function_coverage=1 00:10:48.143 --rc genhtml_legend=1 00:10:48.143 --rc geninfo_all_blocks=1 00:10:48.143 --rc geninfo_unexecuted_blocks=1 00:10:48.143 00:10:48.143 ' 00:10:48.143 14:11:45 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:48.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.143 --rc genhtml_branch_coverage=1 00:10:48.143 --rc genhtml_function_coverage=1 00:10:48.143 --rc genhtml_legend=1 00:10:48.143 --rc geninfo_all_blocks=1 00:10:48.143 --rc geninfo_unexecuted_blocks=1 00:10:48.143 00:10:48.143 ' 00:10:48.143 14:11:45 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:48.143 14:11:45 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:48.143 14:11:45 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:48.143 14:11:45 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:48.143 14:11:45 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:48.143 14:11:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.143 14:11:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.143 14:11:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.143 14:11:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.143 14:11:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.143 14:11:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.143 14:11:45 -- paths/export.sh@5 -- # export PATH 00:10:48.143 14:11:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.143 14:11:45 -- nvme/functions.sh@10 -- # ctrls=() 00:10:48.143 14:11:45 -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:48.143 14:11:45 -- nvme/functions.sh@11 -- # nvmes=() 00:10:48.143 14:11:45 -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:48.143 14:11:45 -- nvme/functions.sh@12 -- # bdfs=() 00:10:48.143 14:11:45 -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:48.143 14:11:45 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:48.143 14:11:45 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:48.143 
14:11:45 -- nvme/functions.sh@14 -- # nvme_name= 00:10:48.143 14:11:45 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:48.143 14:11:45 -- nvme/nvme_scc.sh@12 -- # uname 00:10:48.143 14:11:45 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:48.143 14:11:45 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:48.143 14:11:45 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:48.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:48.143 Waiting for block devices as requested 00:10:48.143 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:48.143 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:48.143 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:48.143 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:10:53.452 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:10:53.452 14:11:51 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:53.452 14:11:51 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:53.452 14:11:51 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:53.452 14:11:51 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:53.452 14:11:51 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:10:53.452 14:11:51 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:10:53.452 14:11:51 -- scripts/common.sh@15 -- # local i 00:10:53.452 14:11:51 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:10:53.452 14:11:51 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:53.452 14:11:51 -- scripts/common.sh@24 -- # return 0 00:10:53.452 14:11:51 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:53.452 14:11:51 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:53.452 14:11:51 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:53.452 14:11:51 -- nvme/functions.sh@18 -- # shift 00:10:53.452 14:11:51 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:53.452 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.452 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.452 14:11:51 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:53.452 14:11:51 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.452 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.452 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.452 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:53.452 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:53.452 14:11:51 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:53.452 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.452 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.452 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:53.452 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:53.452 14:11:51 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:53.452 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.452 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.452 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:53.452 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:10:53.452 14:11:51 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:10:53.452 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.452 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.452 14:11:51 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 
00:10:53.452 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:53.452 14:11:51 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:53.452 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.452 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.452 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:53.452 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:53.452 14:11:51 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:53.452 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.452 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.452 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:53.452 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:53.452 14:11:51 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:53.452 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.452 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.452 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:53.452 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:53.452 14:11:51 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 
14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:53.453 14:11:51 -- 
nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:53.453 14:11:51 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.453 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.453 14:11:51 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:53.454 14:11:51 
-- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.454 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.454 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:53.454 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:53.455 
14:11:51 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
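The trace above is SPDK's nvme_get helper walking the output of /usr/local/src/nvme-cli/nvme id-ctrl line by line: functions.sh@21 sets IFS=: and reads each line into reg/val, @22 skips entries whose value is empty, and @23 evals the pair into a global associative array (nvme0 here). A minimal standalone sketch of that pattern, using a bash nameref in place of the script's eval (parse_id_ctrl and ctrl are illustrative names, not SPDK's):

    #!/usr/bin/env bash
    # Sketch of the read loop visible in the trace: turn the "reg : val"
    # lines that nvme-cli prints into an associative array.
    parse_id_ctrl() {                     # parse_id_ctrl <array-name> <device>
        local -n _ref=$1
        local reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}      # drop the padding around the key
            [[ -n $val ]] || continue     # skip headers and blank values
            _ref[$reg]=${val# }           # keep the value text as-is
        done < <(nvme id-ctrl "$2")
    }
    declare -A ctrl=()
    parse_id_ctrl ctrl /dev/nvme0 && echo "oncs=${ctrl[oncs]}"

The eval form in functions.sh splices the array name into the statement instead, which is why every stored field shows up twice in the trace: once as the eval and once as the resulting assignment.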
00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:53.455 14:11:51 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 
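With nvme0 fully parsed, several of the values captured above are bitmasks worth expanding. ONCS=0x15d sets bits 0, 2, 3, 4, 6 and 8 (Compare, Dataset Management, Write Zeroes, saveable Features, Timestamp, Copy, per the NVMe base spec's bit assignments), while SQES=0x66 and CQES=0x44 carry the max/min queue-entry sizes in their high/low nibbles as powers of two. A quick decode, detached from the test itself:

    # Decode the ONCS/SQES/CQES values recorded for nvme0 above.
    oncs=0x15d sqes=0x66 cqes=0x44
    names=([0]=Compare [1]="Write Uncorrectable" [2]="Dataset Management"
           [3]="Write Zeroes" [4]="Save/Select in Features" [5]=Reservations
           [6]=Timestamp [7]=Verify [8]=Copy)
    for bit in "${!names[@]}"; do
        (( oncs & (1 << bit) )) && echo "ONCS bit $bit: ${names[$bit]}"
    done
    # SQES/CQES: low nibble = required size, high nibble = max size, log2 bytes.
    echo "SQE: $((1 << (sqes & 0xf))) bytes, CQE: $((1 << (cqes & 0xf))) bytes"

VWC=0x7 similarly flags a volatile write cache in bit 0, with the remaining set bits describing broadcast-flush behavior.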
00:10:53.455 14:11:51 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:53.455 14:11:51 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:53.455 14:11:51 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:53.455 14:11:51 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:10:53.455 14:11:51 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:53.455 14:11:51 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:53.455 14:11:51 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:53.455 14:11:51 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:10:53.455 14:11:51 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:10:53.455 14:11:51 -- scripts/common.sh@15 -- # local i 00:10:53.455 14:11:51 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:10:53.455 14:11:51 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:53.455 14:11:51 -- scripts/common.sh@24 -- # return 0 00:10:53.455 14:11:51 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:53.455 14:11:51 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:53.455 14:11:51 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:53.455 14:11:51 -- nvme/functions.sh@18 -- # shift 00:10:53.455 14:11:51 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:53.455 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:53.456 14:11:51 -- 
nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.456 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.456 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:53.456 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:53.457 14:11:51 -- 
nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.457 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.457 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.457 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.458 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.458 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.458 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.458 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.458 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 
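The second controller (vid 0x1b36/ssvid 0x1af4, another QEMU device, serial 12342) is parsed with the same loop and lands on nearly the same values as nvme0; the visible differences are small (for instance endgidmax drops from 1 on the FDP subsystem to 0 here). Two of its fields decode usefully: OACS=0x12a and MDTS=7. A sketch with the standard OACS bit names, assuming a 4 KiB minimum page size (CAP.MPSMIN=0, which this log does not show):

    # Decode OACS (0x12a) and MDTS (7) as recorded for nvme1 above.
    oacs=0x12a mdts=7
    declare -A oacs_bits=([1]="Format NVM" [3]="Namespace Management"
                          [5]="Directives" [8]="Doorbell Buffer Config")
    for bit in 1 3 5 8; do
        (( oacs & (1 << bit) )) && echo "OACS: ${oacs_bits[$bit]} supported"
    done
    # MDTS is a power of two in units of the minimum memory page size;
    # assuming 4 KiB pages, this controller caps transfers at 512 KiB.
    echo "max transfer: $(( (1 << mdts) * 4096 )) bytes"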
00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.458 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.458 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.458 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.458 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.458 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.458 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.458 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.458 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.458 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.458 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.458 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # 
nvme1[awupf]=0 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.458 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.458 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.458 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:53.459 14:11:51 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:53.459 14:11:51 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:53.459 14:11:51 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:53.459 14:11:51 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@18 -- # shift 00:10:53.459 14:11:51 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 
-- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.459 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:10:53.459 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.459 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # 
nvme1n1[nmic]=0 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.460 14:11:51 -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.460 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.460 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:53.460 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 
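The id-ns pass for nvme1n1 ends by recording the eight supported LBA formats, having earlier stored flbas=0x4; the low nibble of FLBAS indexes the active format, so lbaf4 (ms:0 lbads:12, the entry nvme-cli tagged "(in use)") is the live one: 4096-byte blocks with no metadata. Extracting that from the array the loop just filled, as a detached sketch rather than a quote from functions.sh:

    # Compute the active block size from the nvme1n1 fields parsed above.
    flbas=${nvme1n1[flbas]}                 # 0x4 in this log
    fmt=$(( flbas & 0xf ))                  # FLBAS bits 3:0 pick the LBA format
    lbaf=${nvme1n1[lbaf$fmt]}               # "ms:0 lbads:12 rp:0 (in use)"
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}
    echo "nvme1n1: LBA format $fmt, $((1 << lbads))-byte blocks"

At that block size, the nsze/ncap/nuse of 0x100000 recorded above put the namespace at 4 GiB.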
00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:53.461 14:11:51 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.461 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.461 14:11:51 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:53.461 14:11:51 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:53.461 14:11:51 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:10:53.461 14:11:51 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:10:53.461 14:11:51 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:10:53.461 14:11:51 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@18 -- # shift 00:10:53.462 14:11:51 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:53.462 14:11:51 -- 
nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:10:53.462 14:11:51 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.462 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.462 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:10:53.462 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 
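
That closing assignment, _ctrl_ns[${ns##*n}]=nvme1n2, finishes the second namespace, and the loop moves on to /sys/class/nvme/nvme1/nvme1n3 below. The enumeration itself is a plain sysfs glob; reduced to a sketch (the paths and the _ctrl_ns name are taken from the trace, the rest of nvme_get is elided):

    declare -A _ctrl_ns=()
    ctrl=/sys/class/nvme/nvme1

    for ns in "$ctrl/${ctrl##*/}n"*; do     # expands to nvme1n1, nvme1n2, ...
        [[ -e $ns ]] || continue            # guard against a literal non-match
        _ctrl_ns[${ns##*n}]=${ns##*/}       # key is the trailing namespace number
    done

    declare -p _ctrl_ns   # declare -A _ctrl_ns=([1]="nvme1n1" [2]="nvme1n2" [3]="nvme1n3")
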
00:10:53.463 14:11:51 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:53.463 14:11:51 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:10:53.463 14:11:51 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:10:53.463 14:11:51 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@18 -- # shift 00:10:53.463 14:11:51 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 
00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.463 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.463 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:10:53.463 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 
'nvme1n3[nabspf]="0"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.464 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.464 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:53.464 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:10:53.465 14:11:51 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:53.465 14:11:51 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:53.465 14:11:51 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:10:53.465 14:11:51 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:53.465 14:11:51 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:53.465 14:11:51 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:10:53.465 14:11:51 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:10:53.465 14:11:51 -- scripts/common.sh@15 -- # local i 00:10:53.465 14:11:51 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:10:53.465 14:11:51 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:53.465 14:11:51 -- scripts/common.sh@24 -- # return 0 00:10:53.465 14:11:51 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:53.465 14:11:51 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:53.465 14:11:51 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@18 -- # shift 00:10:53.465 14:11:51 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl 
/dev/nvme2 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 
'nvme2[crdt3]="0"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.465 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.465 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.465 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 
14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 
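
Two of the id-ctrl values parsed just above are easy to misread: wctemp=343 and cctemp=373 are Kelvin, per NVMe convention, not degrees. With the usual integer offset they come out to the thresholds this emulated QEMU controller advertises:

    wctemp=343 cctemp=373
    echo "warning: $((wctemp - 273)) C, critical: $((cctemp - 273)) C"
    # warning: 70 C, critical: 100 C
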
00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 
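
Before each of these id-ctrl parses, the trace pins the controller to a PCI address (pci=0000:00:06.0 for nvme2 above, bdfs["$ctrl_dev"]=0000:00:08.0 for nvme1 earlier) via pci_can_use() in scripts/common.sh. Outside the suite, one way to rebuild that controller-to-BDF map is to resolve the sysfs device link; a sketch under that assumption, which covers PCIe-attached controllers but does not reproduce the suite's allow/deny filtering:

    declare -A bdfs=()

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl/device ]] || continue
        bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:06.0
        bdfs[${ctrl##*/}]=$bdf                            # nvme2 -> 0000:00:06.0
    done

    declare -p bdfs
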
00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.466 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.466 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:53.466 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:53.467 14:11:51 -- 
nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 
-- # eval 'nvme2[sgls]="0x1"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 
-- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:53.467 14:11:51 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:53.467 14:11:51 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:53.467 14:11:51 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:53.467 14:11:51 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@18 -- # shift 00:10:53.467 14:11:51 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.467 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.467 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:53.467 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # 
nvme2n1[nsfeat]=0x14 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 
00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:53.468 14:11:51 -- nvme/functions.sh@23 
-- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:53.468 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.468 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.468 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.469 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.469 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.469 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.469 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.469 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.469 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.469 14:11:51 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.469 14:11:51 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:53.469 14:11:51 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:53.469 14:11:51 -- 
nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:53.469 14:11:51 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:10:53.469 14:11:51 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:53.469 14:11:51 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:53.469 14:11:51 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:53.469 14:11:51 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:10:53.469 14:11:51 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:10:53.469 14:11:51 -- scripts/common.sh@15 -- # local i 00:10:53.469 14:11:51 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:10:53.469 14:11:51 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:53.469 14:11:51 -- scripts/common.sh@24 -- # return 0 00:10:53.469 14:11:51 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:53.469 14:11:51 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:53.469 14:11:51 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:53.469 14:11:51 -- nvme/functions.sh@18 -- # shift 00:10:53.469 14:11:51 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.469 14:11:51 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:53.469 14:11:51 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.469 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.469 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.469 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.469 14:11:51 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.469 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.469 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:53.469 14:11:51 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:53.469 14:11:51 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.469 14:11:51 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:53.469 14:11:51 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:53.469 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:53.469 14:11:52 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:53.469 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.469 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.469 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.469 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:10:53.469 14:11:52 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.780 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.780 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.780 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.780 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.780 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.780 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.780 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.780 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.780 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- 
# nvme3[cntrltype]=1 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.780 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.780 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.780 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.780 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:53.780 14:11:52 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.780 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.780 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:53.781 14:11:52 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.781 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:53.781 14:11:52 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:53.781 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 
00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # 
IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.782 14:11:52 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:53.782 14:11:52 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:53.782 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:53.783 14:11:52 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:53.783 14:11:52 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:10:53.783 14:11:52 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:10:53.783 14:11:52 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@18 -- # shift 00:10:53.783 14:11:52 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.783 
14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.783 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.783 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.783 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.784 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.784 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.784 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.784 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # 
nvme3n1[nsattr]=0 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.784 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.784 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.784 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.784 14:11:52 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.784 14:11:52 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.784 14:11:52 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.784 14:11:52 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.784 14:11:52 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.784 14:11:52 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:53.784 14:11:52 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.784 14:11:52 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.784 14:11:52 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:53.784 14:11:52 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # IFS=: 00:10:53.784 14:11:52 -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.784 14:11:52 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:10:53.784 14:11:52 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:53.784 14:11:52 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:53.784 14:11:52 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:10:53.784 14:11:52 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:53.784 14:11:52 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:53.784 14:11:52 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:53.784 14:11:52 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:10:53.784 14:11:52 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:53.784 14:11:52 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:10:53.784 14:11:52 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:10:53.784 14:11:52 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:10:53.784 14:11:52 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:10:53.784 14:11:52 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:53.784 14:11:52 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:10:53.784 14:11:52 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:10:53.784 14:11:52 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:10:53.784 14:11:52 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:10:53.784 14:11:52 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:53.784 14:11:52 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:53.784 14:11:52 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:53.784 14:11:52 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:53.784 14:11:52 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:53.784 14:11:52 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:53.784 14:11:52 -- nvme/functions.sh@197 -- # echo nvme1 00:10:53.784 14:11:52 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:53.784 14:11:52 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:10:53.784 14:11:52 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:10:53.784 14:11:52 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:10:53.784 
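The namespace dump above is one mechanical pattern repeated per field: nvme_get pipes nvme-cli identify output through a `while IFS=: read -r reg val` loop and eval's each pair into an associative array named after the device (here nvme3n1). A minimal standalone sketch of that loop, a hedged simplification rather than the exact functions.sh code, assuming nvme-cli's usual "field : value" layout and /dev/nvme3n1 as a stand-in target:

  # hedged simplification of the nvme/functions.sh parse loop traced above
  declare -A ns
  while IFS=: read -r reg val; do
      [[ -n $val ]] || continue              # skip lines with no value, as traced
      reg=${reg//[[:space:]]/}               # "lbaf  4 " -> "lbaf4"
      val=${val#"${val%%[![:space:]]*}"}     # left-trim the value
      ns[$reg]=$val
  done < <(nvme id-ns /dev/nvme3n1)
  echo "${ns[lbaf4]}"                        # e.g. "ms:0 lbads:12 rp:0 (in use)"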
14:11:52 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:10:53.784 14:11:52 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:53.784 14:11:52 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:53.784 14:11:52 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:53.784 14:11:52 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:53.784 14:11:52 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:53.784 14:11:52 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:53.784 14:11:52 -- nvme/functions.sh@197 -- # echo nvme0 00:10:53.784 14:11:52 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:53.784 14:11:52 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:10:53.784 14:11:52 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:10:53.784 14:11:52 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:10:53.784 14:11:52 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:10:53.784 14:11:52 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:53.784 14:11:52 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:53.784 14:11:52 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:53.784 14:11:52 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:53.784 14:11:52 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:53.784 14:11:52 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:53.784 14:11:52 -- nvme/functions.sh@197 -- # echo nvme3 00:10:53.784 14:11:52 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:53.784 14:11:52 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:10:53.784 14:11:52 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:10:53.784 14:11:52 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:10:53.784 14:11:52 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:10:53.784 14:11:52 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:53.784 14:11:52 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:53.784 14:11:52 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:53.784 14:11:52 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:53.784 14:11:52 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:53.785 14:11:52 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:53.785 14:11:52 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:53.785 14:11:52 -- nvme/functions.sh@197 -- # echo nvme2 00:10:53.785 14:11:52 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:10:53.785 14:11:52 -- nvme/functions.sh@206 -- # echo nvme1 00:10:53.785 14:11:52 -- nvme/functions.sh@207 -- # return 0 00:10:53.785 14:11:52 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:53.785 14:11:52 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:10:53.785 14:11:52 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:54.724 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:54.724 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:10:54.724 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:10:54.724 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:10:54.724 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:10:54.724 14:11:53 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:10:54.724 14:11:53 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:54.724 14:11:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:54.724 14:11:53 -- common/autotest_common.sh@10 -- # set +x 00:10:54.724 ************************************ 00:10:54.724 START TEST nvme_simple_copy 00:10:54.724 ************************************ 00:10:54.724 14:11:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:10:54.985 Initializing NVMe Controllers 00:10:54.985 Attaching to 0000:00:08.0 00:10:54.985 Controller supports SCC. Attached to 0000:00:08.0 00:10:54.985 Namespace ID: 1 size: 4GB 00:10:54.985 Initialization complete. 00:10:54.985 00:10:54.985 Controller QEMU NVMe Ctrl (12342 ) 00:10:54.985 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:54.985 Namespace Block Size:4096 00:10:54.985 Writing LBAs 0 to 63 with Random Data 00:10:54.985 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:54.985 LBAs matching Written Data: 64 00:10:55.245 00:10:55.245 real 0m0.288s 00:10:55.245 user 0m0.109s 00:10:55.245 sys 0m0.077s 00:10:55.245 14:11:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:55.245 14:11:53 -- common/autotest_common.sh@10 -- # set +x 00:10:55.245 ************************************ 00:10:55.245 END TEST nvme_simple_copy 00:10:55.245 ************************************ 00:10:55.245 00:10:55.245 real 0m7.834s 00:10:55.245 user 0m1.114s 00:10:55.245 sys 0m1.516s 00:10:55.245 14:11:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:55.245 14:11:53 -- common/autotest_common.sh@10 -- # set +x 00:10:55.245 ************************************ 00:10:55.245 END TEST nvme_scc 00:10:55.245 ************************************ 00:10:55.245 14:11:53 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:10:55.245 14:11:53 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:55.245 14:11:53 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:10:55.245 14:11:53 -- spdk/autotest.sh@225 -- # [[ 1 -eq 1 ]] 00:10:55.245 14:11:53 -- spdk/autotest.sh@226 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:55.245 14:11:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:55.245 14:11:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:55.245 14:11:53 -- common/autotest_common.sh@10 -- # set +x 00:10:55.245 ************************************ 00:10:55.245 START TEST nvme_fdp 00:10:55.245 ************************************ 00:10:55.245 14:11:53 -- common/autotest_common.sh@1114 -- # test/nvme/nvme_fdp.sh 00:10:55.245 * Looking for test storage... 
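Both nvme_scc and the nvme_fdp run starting above pick their controller off the ONCS walk traced before it: all four QEMU controllers report oncs=0x15d, ctrl_has_scc accepts a controller when bit 8 of ONCS (Copy command, i.e. Simple Copy support) is set, and nvme1 is simply the first match returned. The simple-copy pass then confirms the capability end to end: 64 LBAs written with random data at 0..63, copied to destination LBA 256, and all 64 verified. The bit test in isolation:

  oncs=0x15d                     # as echoed for nvme0..nvme3 above
  if (( oncs & 1 << 8 )); then   # 0x15d = 0b1_0101_1101 -> bit 8 set
      echo "controller advertises the Copy command (SCC)"
  fi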
00:10:55.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:55.246 14:11:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:55.246 14:11:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:55.246 14:11:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:55.506 14:11:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:55.506 14:11:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:55.506 14:11:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:55.506 14:11:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:55.506 14:11:53 -- scripts/common.sh@335 -- # IFS=.-: 00:10:55.506 14:11:53 -- scripts/common.sh@335 -- # read -ra ver1 00:10:55.506 14:11:53 -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.506 14:11:53 -- scripts/common.sh@336 -- # read -ra ver2 00:10:55.506 14:11:53 -- scripts/common.sh@337 -- # local 'op=<' 00:10:55.506 14:11:53 -- scripts/common.sh@339 -- # ver1_l=2 00:10:55.506 14:11:53 -- scripts/common.sh@340 -- # ver2_l=1 00:10:55.506 14:11:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:55.506 14:11:53 -- scripts/common.sh@343 -- # case "$op" in 00:10:55.506 14:11:53 -- scripts/common.sh@344 -- # : 1 00:10:55.506 14:11:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:55.506 14:11:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:55.506 14:11:53 -- scripts/common.sh@364 -- # decimal 1 00:10:55.506 14:11:53 -- scripts/common.sh@352 -- # local d=1 00:10:55.506 14:11:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.506 14:11:53 -- scripts/common.sh@354 -- # echo 1 00:10:55.506 14:11:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:55.506 14:11:53 -- scripts/common.sh@365 -- # decimal 2 00:10:55.506 14:11:53 -- scripts/common.sh@352 -- # local d=2 00:10:55.506 14:11:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.506 14:11:53 -- scripts/common.sh@354 -- # echo 2 00:10:55.506 14:11:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:55.506 14:11:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:55.506 14:11:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:55.506 14:11:53 -- scripts/common.sh@367 -- # return 0 00:10:55.506 14:11:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.506 14:11:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:55.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.506 --rc genhtml_branch_coverage=1 00:10:55.506 --rc genhtml_function_coverage=1 00:10:55.506 --rc genhtml_legend=1 00:10:55.506 --rc geninfo_all_blocks=1 00:10:55.506 --rc geninfo_unexecuted_blocks=1 00:10:55.506 00:10:55.506 ' 00:10:55.506 14:11:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:55.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.506 --rc genhtml_branch_coverage=1 00:10:55.506 --rc genhtml_function_coverage=1 00:10:55.506 --rc genhtml_legend=1 00:10:55.506 --rc geninfo_all_blocks=1 00:10:55.506 --rc geninfo_unexecuted_blocks=1 00:10:55.506 00:10:55.506 ' 00:10:55.506 14:11:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:55.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.506 --rc genhtml_branch_coverage=1 00:10:55.506 --rc genhtml_function_coverage=1 00:10:55.506 --rc genhtml_legend=1 00:10:55.506 --rc geninfo_all_blocks=1 00:10:55.506 --rc geninfo_unexecuted_blocks=1 00:10:55.506 00:10:55.506 ' 00:10:55.506 14:11:53 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:55.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.506 --rc genhtml_branch_coverage=1 00:10:55.506 --rc genhtml_function_coverage=1 00:10:55.506 --rc genhtml_legend=1 00:10:55.506 --rc geninfo_all_blocks=1 00:10:55.506 --rc geninfo_unexecuted_blocks=1 00:10:55.506 00:10:55.506 ' 00:10:55.506 14:11:53 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:55.506 14:11:53 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:55.506 14:11:53 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:55.506 14:11:53 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:55.506 14:11:53 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:55.506 14:11:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.506 14:11:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.506 14:11:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.506 14:11:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.506 14:11:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.506 14:11:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.506 14:11:53 -- paths/export.sh@5 -- # export PATH 00:10:55.506 14:11:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.506 14:11:53 -- nvme/functions.sh@10 -- # ctrls=() 00:10:55.506 14:11:53 -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:55.506 14:11:53 -- nvme/functions.sh@11 -- # nvmes=() 00:10:55.506 14:11:53 -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:55.506 14:11:53 -- nvme/functions.sh@12 -- # bdfs=() 00:10:55.506 14:11:53 -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:55.506 14:11:53 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:55.506 14:11:53 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:55.506 
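The lcov probe above (lt 1.15 2 via scripts/common.sh's cmp_versions) is a plain component-wise compare: split both version strings on ".", "-" and ":" (the IFS=.-: seen in the trace), then compare fields numerically left to right. A loose, numeric-only reconstruction of that logic:

  lt() {                                    # "is $1 < $2?" - sketch, digit fields only
      local -a v1 v2
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower field decides
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                              # equal is not less-than
  }
  lt 1.15 2 && echo "lcov 1.15 predates 2.x"            # same verdict as the trace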
14:11:53 -- nvme/functions.sh@14 -- # nvme_name= 00:10:55.506 14:11:53 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:55.506 14:11:53 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:55.766 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:56.027 Waiting for block devices as requested 00:10:56.027 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:56.027 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:56.286 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:56.286 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:01.577 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:01.577 14:11:59 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:01.577 14:11:59 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:01.578 14:11:59 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:01.578 14:11:59 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:01.578 14:11:59 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:01.578 14:11:59 -- scripts/common.sh@15 -- # local i 00:11:01.578 14:11:59 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:01.578 14:11:59 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:01.578 14:11:59 -- scripts/common.sh@24 -- # return 0 00:11:01.578 14:11:59 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:01.578 14:11:59 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:01.578 14:11:59 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@18 -- # shift 00:11:01.578 14:11:59 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:01.578 14:11:59 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 
'nvme0[ctratt]="0x88010"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 
14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.578 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:01.578 14:11:59 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:01.578 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:01.579 
14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:01.579 14:11:59 -- 
nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 
14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:01.579 14:11:59 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.579 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.579 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 
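Two of the fields just captured are packed log2 sizes rather than plain counts: sqes=0x66 and cqes=0x44 carry the required queue-entry size in the low nibble and the maximum in the high nibble, each as a power of two. Decoding them:

  sqes=0x66; cqes=0x44    # from the id-ctrl dump above
  printf 'SQ entry: %d..%d bytes\n' $(( 1 << (sqes & 0xf) )) $(( 1 << (sqes >> 4) ))
  printf 'CQ entry: %d..%d bytes\n' $(( 1 << (cqes & 0xf) )) $(( 1 << (cqes >> 4) ))
  # -> SQ entry: 64..64 bytes, CQ entry: 16..16 bytes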
00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # 
nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.580 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.580 14:11:59 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:01.580 14:11:59 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 
00:11:01.580 14:11:59 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:11:01.580 14:11:59 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0
00:11:01.580 14:11:59 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:11:01.580 14:11:59 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:01.580 14:11:59 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:11:01.580 14:11:59 -- nvme/functions.sh@49 -- # pci=0000:00:08.0
00:11:01.580 14:11:59 -- scripts/common.sh@24 -- # pci_can_use 0000:00:08.0 -> return 0
00:11:01.580 14:11:59 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:11:01.580 14:11:59 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:11:01.580 14:11:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:11:01.580 14:11:59 -- nvme/functions.sh@23 -- # nvme1: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
00:11:01.581 14:11:59 -- nvme/functions.sh@23 -- # nvme1: rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0
00:11:01.581 14:11:59 -- nvme/functions.sh@23 -- # nvme1: nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:11:01.582 14:11:59 -- nvme/functions.sh@23 -- # nvme1: mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0
00:11:01.583 14:11:59 -- nvme/functions.sh@23 -- # nvme1: hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:11:01.583 14:11:59 -- nvme/functions.sh@23 -- # nvme1: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3
00:11:01.583 14:11:59 -- nvme/functions.sh@23 -- # nvme1: sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:11:01.583 14:11:59 -- nvme/functions.sh@23 -- # nvme1: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:11:01.583 14:11:59 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:11:01.583 14:11:59 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:01.583 14:11:59 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:11:01.583 14:11:59 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:11:01.583 14:11:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
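What the trace above is doing: nvme_get runs nvme-cli's id-ctrl (or id-ns) against the device and folds each "field : value" output line into a global bash associative array named after the device, so nvme1[vid]=0x1b36, nvme1[mdts]=7, and so on. A minimal sketch of that loop, assuming plain nvme-cli output and simplifying away the helper's shift/nameref plumbing (this is not the verbatim nvme/functions.sh code):

    #!/usr/bin/env bash
    # Sketch only -- a simplified stand-in for nvme_get in nvme/functions.sh.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # skip lines without a "field : value" pair
        reg=${reg//[[:space:]]/}         # field name with padding stripped
        ctrl[$reg]=${val# }              # e.g. ctrl[vid]=0x1b36, ctrl[subnqn]=nqn...
    done < <(nvme id-ctrl /dev/nvme1)
    echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]} subnqn=${ctrl[subnqn]}"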
00:11:01.583 14:11:59 -- nvme/functions.sh@23 -- # nvme1n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:11:01.584 14:11:59 -- nvme/functions.sh@23 -- # nvme1n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:11:01.584 14:11:59 -- nvme/functions.sh@23 -- # nvme1n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:01.584 14:11:59 -- nvme/functions.sh@23 -- # nvme1n1: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
00:11:01.585 14:11:59 -- nvme/functions.sh@23 -- # nvme1n1: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:11:01.585 14:11:59 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:11:01.585 14:11:59 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]]
00:11:01.585 14:11:59 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2
00:11:01.585 14:11:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2
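The identify-namespace data captured for nvme1n1 pins down its geometry: flbas=0x4 selects lbaf4 ("ms:0 lbads:12", marked in use), i.e. 4096-byte logical blocks with no metadata, and nsze=0x100000 means 1,048,576 blocks, or 4 GiB. A sketch deriving that from the values above (a hypothetical helper, not something functions.sh defines):

    # Sketch using the nvme1n1 values captured above.
    flbas=0x4; nsze=0x100000
    fmt=$((flbas & 0xf))                 # low nibble of FLBAS selects the active LBA format
    lbads=12                             # lbaf4 reports lbads:12 (in use)
    block=$((1 << lbads))                # 2^12 = 4096-byte logical blocks
    echo "nvme1n1: lbaf$fmt, $((nsze)) blocks x ${block}B = $(( (nsze * block) >> 30 )) GiB"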
00:11:01.585 14:11:59 -- nvme/functions.sh@23 -- # nvme1n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:11:01.585 14:11:59 -- nvme/functions.sh@23 -- # nvme1n2: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:11:01.586 14:11:59 -- nvme/functions.sh@23 -- # nvme1n2: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:01.586 14:11:59 -- nvme/functions.sh@23 -- # nvme1n2: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
00:11:01.586 14:11:59 -- nvme/functions.sh@23 -- # nvme1n2: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:11:01.586 14:11:59 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2
00:11:01.586 14:11:59 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]]
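Each parsed namespace is registered back into the controller's map through the nameref taken at functions.sh@53, keyed by the namespace index: the trace shows _ctrl_ns[1]=nvme1n1 and _ctrl_ns[2]=nvme1n2, with nvme1n3 next. A simplified sketch of that registration step, assuming the same sysfs layout (again, not the verbatim helper):

    # Sketch of the registration loop seen at functions.sh@53-58.
    declare -A nvme1_ns=()
    declare -n _ctrl_ns=nvme1_ns               # nameref into the per-controller map
    for ns in /sys/class/nvme/nvme1/nvme1n*; do
        [[ -e $ns ]] || continue
        ns=${ns##*/}                           # nvme1n1, nvme1n2, nvme1n3
        _ctrl_ns[${ns##*n}]=$ns                # key by index: nvme1_ns[1]=nvme1n1
    done
    declare -p nvme1_ns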
00:11:01.586 14:11:59 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3
00:11:01.586 14:11:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3
00:11:01.586 14:11:59 -- nvme/functions.sh@23 -- # nvme1n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # nvme1n3: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # nvme1n3: mssrl=128 mcl=128 msrc=127
00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- #
nvme1n3[nulbaf]=0 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.587 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.587 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.587 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.587 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.587 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.587 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.587 14:11:59 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.587 14:11:59 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.587 14:11:59 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.587 14:11:59 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:01.587 14:11:59 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.587 14:11:59 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:01.587 14:11:59 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.587 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.588 14:11:59 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:01.588 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:01.588 14:11:59 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:01.588 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.588 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.588 14:11:59 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:01.588 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:01.588 14:11:59 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:01.588 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.588 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.588 14:11:59 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:01.588 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:01.588 14:11:59 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:01.588 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.588 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.588 14:11:59 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:01.588 14:11:59 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:01.588 14:11:59 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:01.588 14:11:59 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:01.588 14:11:59 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:01.588 14:11:59 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:01.588 14:11:59 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:01.588 14:11:59 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:01.588 14:11:59 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:01.588 14:11:59 -- scripts/common.sh@15 -- # local i 00:11:01.588 14:11:59 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:01.588 14:11:59 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:01.588 14:11:59 -- scripts/common.sh@24 -- # return 0 00:11:01.588 14:11:59 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:01.588 14:11:59 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:01.588 14:11:59 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:01.588 14:11:59 -- nvme/functions.sh@18 -- # shift 00:11:01.588 14:11:59 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:01.588 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.588 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.588 14:11:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:01.588 14:11:59 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:01.588 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.588 14:11:59 -- 
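The records above are one complete nvme_get pass: the output of nvme id-ns is read line by line with IFS=:, each "field : value" pair is split, and an eval stores the value into a global associative array named after the device. A minimal sketch of that idiom, with simplified argument handling (a reconstruction for readability, not the verbatim nvme/functions.sh source):

    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val
        declare -gA "$ref=()"                     # global associative array, e.g. nvme1n3=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}              # field name with padding stripped, e.g. "nsze"
            [[ -n $reg && -n $val ]] || continue  # skip blank or partial lines
            eval "${ref}[\$reg]=\${val# }"        # e.g. nvme1n3[nsze]=0x100000
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }

The eval is what produces the nvme1n3[nsze]=0x100000-style trace lines: the array name is spliced into an assignment and evaluated. On bash >= 4.3 a nameref (local -n, as used at functions.sh@53 for _ctrl_ns) would be an alternative to the eval.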
00:11:01.588 14:11:59 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:01.588 14:11:59 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:11:01.588 14:11:59 -- nvme/functions.sh@49 -- # pci=0000:00:06.0
00:11:01.588 14:11:59 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0
00:11:01.588 14:11:59 -- scripts/common.sh@15 -- # local i
00:11:01.588 14:11:59 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]]
00:11:01.588 14:11:59 -- scripts/common.sh@22 -- # [[ -z '' ]]
00:11:01.588 14:11:59 -- scripts/common.sh@24 -- # return 0
00:11:01.588 14:11:59 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:11:01.588 14:11:59 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:11:01.588 14:11:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
    nvme2: vid=0x1b36 ssvid=0x1af4 sn="12340 " mn="QEMU NVMe Ctrl " fr="8.0.0 " rab=6 ieee=525400 cmic=0
    mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
    fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
    oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
    mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0
    mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0
    anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256
    oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1
    mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0
    fcatt=0 msdbd=0 ofcs=0 ps0="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"
    rwt="0 rwl:0 idle_power:- active_power:-" active_power_workload=-
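A few of the nvme2 values above decode into concrete limits: mdts=7 caps a single transfer at 2^7 pages of the controller's minimum memory page size; sqes=0x66 and cqes=0x44 pack required/maximum queue-entry sizes as log2 values in the low/high nibbles (2^6 = 64-byte SQEs, 2^4 = 16-byte CQEs); nn=256 is the controller's namespace count limit. A quick check of the transfer cap, assuming the usual 4 KiB minimum page size (MPSMIN comes from the CAP register, which is not part of this excerpt):

    mdts=7 mpsmin_bytes=4096
    echo "$(( (1 << mdts) * mpsmin_bytes )) bytes"   # 524288 bytes = 512 KiB per command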
00:11:01.590 14:11:59 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:11:01.590 14:11:59 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:01.590 14:11:59 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:11:01.590 14:11:59 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:11:01.590 14:11:59 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:11:01.590 14:11:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
    nvme2n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0
    nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
    npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
    nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0="ms:0 lbads:9 rp:0 " lbaf1="ms:8 lbads:9 rp:0 " lbaf2="ms:16 lbads:9 rp:0 " lbaf3="ms:64 lbads:9 rp:0 "
    lbaf4="ms:0 lbads:12 rp:0 " lbaf5="ms:8 lbads:12 rp:0 " lbaf6="ms:16 lbads:12 rp:0 " lbaf7="ms:64 lbads:12 rp:0 (in use)"
00:11:01.592 14:11:59 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:11:01.592 14:11:59 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:11:01.592 14:11:59 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:11:01.592 14:11:59 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0
00:11:01.592 14:11:59 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
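For nvme2n1 the active LBA format follows from flbas=0x7: the low four bits select lbaf7, i.e. lbads:12 (2^12 = 4096-byte data blocks) with ms:64 (64 bytes of metadata per block; bit 4 of flbas is clear, so metadata would go in a separate buffer). With nsze=0x17a17a blocks, the namespace size works out as below (a back-of-envelope check, not something the script computes at this point):

    nsze=0x17a17a lbads=12
    echo "$(( nsze * (1 << lbads) )) bytes"   # 6343335936 bytes, about 5.9 GiB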
-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:01.592 14:11:59 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:01.592 14:11:59 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:01.592 14:11:59 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:01.592 14:11:59 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:01.592 14:11:59 -- scripts/common.sh@15 -- # local i 00:11:01.592 14:11:59 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:01.592 14:11:59 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:01.592 14:11:59 -- scripts/common.sh@24 -- # return 0 00:11:01.592 14:11:59 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:01.592 14:11:59 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:01.592 14:11:59 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:01.592 14:11:59 -- nvme/functions.sh@18 -- # shift 00:11:01.592 14:11:59 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.592 14:11:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:01.592 14:11:59 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.592 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.592 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.592 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.592 14:11:59 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.592 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.592 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.592 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:01.592 14:11:59 -- nvme/functions.sh@23 
-- # eval 'nvme3[ieee]="525400"' 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.592 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.592 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.592 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.592 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.592 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.592 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.592 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.592 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.592 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:01.592 14:11:59 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.592 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 
14:11:59 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # 
nvme3[frmw]=0x3 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 
'nvme3[tnvmcap]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:01.593 14:11:59 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.593 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.593 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:01.594 
14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- 
nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.594 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.594 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.594 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:01.595 14:11:59 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:01.595 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:11:59 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:01.595 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:01.595 14:11:59 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:01.595 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:11:59 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:01.595 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:01.595 14:11:59 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:01.595 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:11:59 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:01.595 14:11:59 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:01.595 14:11:59 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:01.595 14:11:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:11:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:11:59 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:01.595 14:11:59 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:01.595 14:11:59 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:01.595 14:12:00 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:01.595 14:12:00 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@18 -- # shift 00:11:01.595 14:12:00 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ 
-n 0x140000 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:01.595 
14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[npwg]="0"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.595 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.595 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:01.595 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.596 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.596 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.596 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.596 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.596 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.596 
14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.596 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.596 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.596 14:12:00 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.596 14:12:00 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.596 14:12:00 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.596 14:12:00 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.596 14:12:00 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.596 14:12:00 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.596 14:12:00 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.596 14:12:00 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.596 14:12:00 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:01.596 14:12:00 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # IFS=: 00:11:01.596 14:12:00 -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.596 14:12:00 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:01.596 14:12:00 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:01.596 14:12:00 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:01.596 14:12:00 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:01.596 14:12:00 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:01.596 14:12:00 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:01.596 14:12:00 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:01.596 14:12:00 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:01.596 14:12:00 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:01.596 14:12:00 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:01.596 14:12:00 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:01.596 14:12:00 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:01.596 14:12:00 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:01.596 14:12:00 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:01.596 14:12:00 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:01.596 14:12:00 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:01.596 14:12:00 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:01.596 14:12:00 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:01.596 14:12:00 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:01.596 14:12:00 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:01.596 14:12:00 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:01.596 14:12:00 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:01.596 14:12:00 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:01.596 14:12:00 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:01.596 14:12:00 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:01.596 14:12:00 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:01.596 14:12:00 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:01.596 14:12:00 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:01.596 14:12:00 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:01.596 14:12:00 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:01.596 14:12:00 -- 
nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:01.596 14:12:00 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:01.596 14:12:00 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@76 -- # echo 0x88010 00:11:01.596 14:12:00 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:01.596 14:12:00 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:01.596 14:12:00 -- nvme/functions.sh@197 -- # echo nvme0 00:11:01.596 14:12:00 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:01.596 14:12:00 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:01.596 14:12:00 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:01.596 14:12:00 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:01.596 14:12:00 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:01.596 14:12:00 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:01.596 14:12:00 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:01.596 14:12:00 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:01.596 14:12:00 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:01.596 14:12:00 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:01.596 14:12:00 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:01.596 14:12:00 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:01.596 14:12:00 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:01.596 14:12:00 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:01.596 14:12:00 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:01.596 14:12:00 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:01.596 14:12:00 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:01.596 14:12:00 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:01.596 14:12:00 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:01.596 14:12:00 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:01.596 14:12:00 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:01.596 14:12:00 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:01.596 14:12:00 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:01.596 14:12:00 -- nvme/functions.sh@204 -- # trap - ERR 00:11:01.596 14:12:00 -- nvme/functions.sh@204 -- # print_backtrace 00:11:01.596 14:12:00 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:01.597 14:12:00 -- common/autotest_common.sh@1142 -- # return 0 00:11:01.597 14:12:00 -- nvme/functions.sh@204 -- # trap - ERR 00:11:01.597 14:12:00 -- nvme/functions.sh@204 -- # print_backtrace 00:11:01.597 14:12:00 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:01.597 14:12:00 -- common/autotest_common.sh@1142 -- # return 0 00:11:01.597 14:12:00 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:01.597 14:12:00 -- nvme/functions.sh@206 -- # echo nvme0 00:11:01.597 14:12:00 -- nvme/functions.sh@207 -- # return 0 00:11:01.597 14:12:00 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:11:01.597 14:12:00 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:11:01.597 14:12:00 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:02.538 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:02.538 0000:00:06.0 (1b36 0010): nvme -> 
uio_pci_generic 00:11:02.538 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:02.798 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:02.798 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:02.798 14:12:01 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:02.798 14:12:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:02.798 14:12:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:02.798 14:12:01 -- common/autotest_common.sh@10 -- # set +x 00:11:02.798 ************************************ 00:11:02.798 START TEST nvme_flexible_data_placement 00:11:02.798 ************************************ 00:11:02.798 14:12:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:03.059 Initializing NVMe Controllers 00:11:03.059 Attaching to 0000:00:09.0 00:11:03.059 Controller supports FDP Attached to 0000:00:09.0 00:11:03.059 Namespace ID: 1 Endurance Group ID: 1 00:11:03.059 Initialization complete. 00:11:03.059 00:11:03.059 ================================== 00:11:03.059 == FDP tests for Namespace: #01 == 00:11:03.059 ================================== 00:11:03.059 00:11:03.059 Get Feature: FDP: 00:11:03.059 ================= 00:11:03.059 Enabled: Yes 00:11:03.059 FDP configuration Index: 0 00:11:03.059 00:11:03.059 FDP configurations log page 00:11:03.059 =========================== 00:11:03.059 Number of FDP configurations: 1 00:11:03.059 Version: 0 00:11:03.059 Size: 112 00:11:03.059 FDP Configuration Descriptor: 0 00:11:03.059 Descriptor Size: 96 00:11:03.059 Reclaim Group Identifier format: 2 00:11:03.059 FDP Volatile Write Cache: Not Present 00:11:03.059 FDP Configuration: Valid 00:11:03.059 Vendor Specific Size: 0 00:11:03.059 Number of Reclaim Groups: 2 00:11:03.059 Number of Reclaim Unit Handles: 8 00:11:03.059 Max Placement Identifiers: 128 00:11:03.059 Number of Namespaces Supported: 256 00:11:03.059 Reclaim Unit Nominal Size: 6000000 bytes 00:11:03.059 Estimated Reclaim Unit Time Limit: Not Reported 00:11:03.059 RUH Desc #000: RUH Type: Initially Isolated 00:11:03.059 RUH Desc #001: RUH Type: Initially Isolated 00:11:03.059 RUH Desc #002: RUH Type: Initially Isolated 00:11:03.059 RUH Desc #003: RUH Type: Initially Isolated 00:11:03.059 RUH Desc #004: RUH Type: Initially Isolated 00:11:03.059 RUH Desc #005: RUH Type: Initially Isolated 00:11:03.059 RUH Desc #006: RUH Type: Initially Isolated 00:11:03.059 RUH Desc #007: RUH Type: Initially Isolated 00:11:03.059 00:11:03.059 FDP reclaim unit handle usage log page 00:11:03.059 ====================================== 00:11:03.059 Number of Reclaim Unit Handles: 8 00:11:03.059 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:03.059 RUH Usage Desc #001: RUH Attributes: Unused 00:11:03.059 RUH Usage Desc #002: RUH Attributes: Unused 00:11:03.059 RUH Usage Desc #003: RUH Attributes: Unused 00:11:03.059 RUH Usage Desc #004: RUH Attributes: Unused 00:11:03.059 RUH Usage Desc #005: RUH Attributes: Unused 00:11:03.059 RUH Usage Desc #006: RUH Attributes: Unused 00:11:03.059 RUH Usage Desc #007: RUH Attributes: Unused 00:11:03.059 00:11:03.059 FDP statistics log page 00:11:03.059 ======================= 00:11:03.059 Host bytes with metadata written: 921976832 00:11:03.059 Media bytes with metadata written: 922198016 00:11:03.059 Media bytes erased: 0 00:11:03.059 00:11:03.059 FDP Reclaim unit handle status
00:11:03.059 ============================== 00:11:03.059 Number of RUHS descriptors: 2 00:11:03.059 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000050bc 00:11:03.059 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:03.059 00:11:03.059 FDP write on placement id: 0 success 00:11:03.059 00:11:03.059 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:03.059 00:11:03.059 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:03.059 00:11:03.059 Get Feature: FDP Events for Placement handle: #0 00:11:03.059 ======================== 00:11:03.059 Number of FDP Events: 6 00:11:03.059 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:03.059 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:03.059 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:11:03.059 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:03.059 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:03.059 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:03.059 00:11:03.059 FDP events log page 00:11:03.059 =================== 00:11:03.059 Number of FDP events: 1 00:11:03.059 FDP Event #0: 00:11:03.059 Event Type: RU Not Written to Capacity 00:11:03.059 Placement Identifier: Valid 00:11:03.059 NSID: Valid 00:11:03.059 Location: Valid 00:11:03.059 Placement Identifier: 0 00:11:03.059 Event Timestamp: b 00:11:03.059 Namespace Identifier: 1 00:11:03.059 Reclaim Group Identifier: 0 00:11:03.059 Reclaim Unit Handle Identifier: 0 00:11:03.059 00:11:03.059 FDP test passed 00:11:03.059 00:11:03.059 real 0m0.243s 00:11:03.059 user 0m0.067s 00:11:03.059 sys 0m0.075s 00:11:03.059 14:12:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:03.059 ************************************ 00:11:03.059 END TEST nvme_flexible_data_placement 00:11:03.059 ************************************ 00:11:03.059 14:12:01 -- common/autotest_common.sh@10 -- # set +x 00:11:03.059 00:11:03.059 real 0m7.878s 00:11:03.059 user 0m1.048s 00:11:03.059 sys 0m1.653s 00:11:03.059 14:12:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:03.059 14:12:01 -- common/autotest_common.sh@10 -- # set +x 00:11:03.059 ************************************ 00:11:03.059 END TEST nvme_fdp 00:11:03.059 ************************************ 00:11:03.059 14:12:01 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:03.059 14:12:01 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:03.059 14:12:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:03.059 14:12:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:03.059 14:12:01 -- common/autotest_common.sh@10 -- # set +x 00:11:03.319 ************************************ 00:11:03.319 START TEST nvme_rpc 00:11:03.319 ************************************ 00:11:03.319 14:12:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:03.319 * Looking for test storage...
00:11:03.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:03.319 14:12:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:03.319 14:12:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:03.319 14:12:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:03.319 14:12:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:03.319 14:12:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:03.319 14:12:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:03.319 14:12:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:03.319 14:12:01 -- scripts/common.sh@335 -- # IFS=.-: 00:11:03.319 14:12:01 -- scripts/common.sh@335 -- # read -ra ver1 00:11:03.319 14:12:01 -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.319 14:12:01 -- scripts/common.sh@336 -- # read -ra ver2 00:11:03.319 14:12:01 -- scripts/common.sh@337 -- # local 'op=<' 00:11:03.319 14:12:01 -- scripts/common.sh@339 -- # ver1_l=2 00:11:03.319 14:12:01 -- scripts/common.sh@340 -- # ver2_l=1 00:11:03.319 14:12:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:03.319 14:12:01 -- scripts/common.sh@343 -- # case "$op" in 00:11:03.319 14:12:01 -- scripts/common.sh@344 -- # : 1 00:11:03.319 14:12:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:03.319 14:12:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:03.319 14:12:01 -- scripts/common.sh@364 -- # decimal 1 00:11:03.319 14:12:01 -- scripts/common.sh@352 -- # local d=1 00:11:03.319 14:12:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.319 14:12:01 -- scripts/common.sh@354 -- # echo 1 00:11:03.319 14:12:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:03.319 14:12:01 -- scripts/common.sh@365 -- # decimal 2 00:11:03.319 14:12:01 -- scripts/common.sh@352 -- # local d=2 00:11:03.319 14:12:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.319 14:12:01 -- scripts/common.sh@354 -- # echo 2 00:11:03.319 14:12:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:03.319 14:12:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:03.319 14:12:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:03.319 14:12:01 -- scripts/common.sh@367 -- # return 0 00:11:03.319 14:12:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.319 14:12:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:03.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.319 --rc genhtml_branch_coverage=1 00:11:03.319 --rc genhtml_function_coverage=1 00:11:03.319 --rc genhtml_legend=1 00:11:03.319 --rc geninfo_all_blocks=1 00:11:03.319 --rc geninfo_unexecuted_blocks=1 00:11:03.319 00:11:03.319 ' 00:11:03.319 14:12:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:03.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.319 --rc genhtml_branch_coverage=1 00:11:03.319 --rc genhtml_function_coverage=1 00:11:03.319 --rc genhtml_legend=1 00:11:03.319 --rc geninfo_all_blocks=1 00:11:03.319 --rc geninfo_unexecuted_blocks=1 00:11:03.319 00:11:03.319 ' 00:11:03.319 14:12:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:03.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.319 --rc genhtml_branch_coverage=1 00:11:03.319 --rc genhtml_function_coverage=1 00:11:03.319 --rc genhtml_legend=1 00:11:03.319 --rc geninfo_all_blocks=1 00:11:03.319 --rc geninfo_unexecuted_blocks=1 00:11:03.319 00:11:03.319 ' 00:11:03.319 14:12:01 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:03.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.319 --rc genhtml_branch_coverage=1 00:11:03.319 --rc genhtml_function_coverage=1 00:11:03.319 --rc genhtml_legend=1 00:11:03.319 --rc geninfo_all_blocks=1 00:11:03.319 --rc geninfo_unexecuted_blocks=1 00:11:03.319 00:11:03.319 ' 00:11:03.319 14:12:01 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.319 14:12:01 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:03.319 14:12:01 -- common/autotest_common.sh@1519 -- # bdfs=() 00:11:03.319 14:12:01 -- common/autotest_common.sh@1519 -- # local bdfs 00:11:03.319 14:12:01 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:11:03.319 14:12:01 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:11:03.319 14:12:01 -- common/autotest_common.sh@1508 -- # bdfs=() 00:11:03.319 14:12:01 -- common/autotest_common.sh@1508 -- # local bdfs 00:11:03.319 14:12:01 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:03.319 14:12:01 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:03.319 14:12:01 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:11:03.319 14:12:01 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:11:03.319 14:12:01 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:03.319 14:12:01 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:11:03.319 14:12:01 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:11:03.319 14:12:01 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66604 00:11:03.319 14:12:01 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:03.319 14:12:01 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66604 00:11:03.319 14:12:01 -- common/autotest_common.sh@829 -- # '[' -z 66604 ']' 00:11:03.319 14:12:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.319 14:12:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:03.319 14:12:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.319 14:12:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:03.319 14:12:01 -- common/autotest_common.sh@10 -- # set +x 00:11:03.319 14:12:01 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:03.580 [2024-11-19 14:12:01.937596] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
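The get_first_nvme_bdf trace above reduces to a short pipeline: enumerate the local NVMe controllers as JSON config and take the first PCI address. A minimal sketch of the same selection, assuming the repo path used throughout this log:

  rootdir=/home/vagrant/spdk_repo/spdk
  # Enumerate NVMe controllers and collect their PCI addresses (traddr)
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  # Four controllers are found in this run; the test uses the first one
  echo "${bdfs[0]}"    # here: 0000:00:06.0
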
00:11:03.580 [2024-11-19 14:12:01.937720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66604 ] 00:11:03.580 [2024-11-19 14:12:02.092995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:03.841 [2024-11-19 14:12:02.372384] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:03.841 [2024-11-19 14:12:02.372943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.841 [2024-11-19 14:12:02.372984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.223 14:12:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:05.223 14:12:03 -- common/autotest_common.sh@862 -- # return 0 00:11:05.223 14:12:03 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:11:05.223 Nvme0n1 00:11:05.223 14:12:03 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:05.223 14:12:03 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:05.484 request: 00:11:05.484 { 00:11:05.484 "filename": "non_existing_file", 00:11:05.484 "bdev_name": "Nvme0n1", 00:11:05.484 "method": "bdev_nvme_apply_firmware", 00:11:05.484 "req_id": 1 00:11:05.484 } 00:11:05.484 Got JSON-RPC error response 00:11:05.484 response: 00:11:05.484 { 00:11:05.484 "code": -32603, 00:11:05.484 "message": "open file failed." 00:11:05.484 } 00:11:05.484 14:12:03 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:05.484 14:12:03 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:05.484 14:12:03 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:05.745 14:12:04 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:05.745 14:12:04 -- nvme/nvme_rpc.sh@40 -- # killprocess 66604 00:11:05.745 14:12:04 -- common/autotest_common.sh@936 -- # '[' -z 66604 ']' 00:11:05.745 14:12:04 -- common/autotest_common.sh@940 -- # kill -0 66604 00:11:05.745 14:12:04 -- common/autotest_common.sh@941 -- # uname 00:11:05.745 14:12:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:05.745 14:12:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66604 00:11:05.745 14:12:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:05.745 killing process with pid 66604 00:11:05.745 14:12:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:05.745 14:12:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66604' 00:11:05.745 14:12:04 -- common/autotest_common.sh@955 -- # kill 66604 00:11:05.745 14:12:04 -- common/autotest_common.sh@960 -- # wait 66604 00:11:07.662 00:11:07.662 real 0m4.295s 00:11:07.662 user 0m7.771s 00:11:07.662 sys 0m0.758s 00:11:07.662 ************************************ 00:11:07.662 END TEST nvme_rpc 00:11:07.662 ************************************ 00:11:07.662 14:12:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:07.662 14:12:05 -- common/autotest_common.sh@10 -- # set +x 00:11:07.662 14:12:05 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:07.662 14:12:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:07.662 14:12:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 
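Condensed, the nvme_rpc exchange above is three rpc.py calls against the spdk_tgt that was just started; a sketch using the same arguments as the trace (the firmware call is intended to fail):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0   # exposes bdev Nvme0n1
  $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1             # expected error -32603: "open file failed."
  $rpc bdev_nvme_detach_controller Nvme0
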
00:11:07.662 14:12:05 -- common/autotest_common.sh@10 -- # set +x 00:11:07.662 ************************************ 00:11:07.662 START TEST nvme_rpc_timeouts 00:11:07.662 ************************************ 00:11:07.662 14:12:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:07.662 * Looking for test storage... 00:11:07.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:07.662 14:12:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:07.662 14:12:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:07.662 14:12:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:07.662 14:12:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:07.662 14:12:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:07.662 14:12:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:07.662 14:12:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:07.662 14:12:06 -- scripts/common.sh@335 -- # IFS=.-: 00:11:07.662 14:12:06 -- scripts/common.sh@335 -- # read -ra ver1 00:11:07.662 14:12:06 -- scripts/common.sh@336 -- # IFS=.-: 00:11:07.662 14:12:06 -- scripts/common.sh@336 -- # read -ra ver2 00:11:07.662 14:12:06 -- scripts/common.sh@337 -- # local 'op=<' 00:11:07.662 14:12:06 -- scripts/common.sh@339 -- # ver1_l=2 00:11:07.662 14:12:06 -- scripts/common.sh@340 -- # ver2_l=1 00:11:07.662 14:12:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:07.662 14:12:06 -- scripts/common.sh@343 -- # case "$op" in 00:11:07.662 14:12:06 -- scripts/common.sh@344 -- # : 1 00:11:07.662 14:12:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:07.662 14:12:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:07.662 14:12:06 -- scripts/common.sh@364 -- # decimal 1 00:11:07.662 14:12:06 -- scripts/common.sh@352 -- # local d=1 00:11:07.662 14:12:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.662 14:12:06 -- scripts/common.sh@354 -- # echo 1 00:11:07.662 14:12:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:07.662 14:12:06 -- scripts/common.sh@365 -- # decimal 2 00:11:07.662 14:12:06 -- scripts/common.sh@352 -- # local d=2 00:11:07.662 14:12:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:07.662 14:12:06 -- scripts/common.sh@354 -- # echo 2 00:11:07.662 14:12:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:07.662 14:12:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:07.662 14:12:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:07.662 14:12:06 -- scripts/common.sh@367 -- # return 0 00:11:07.662 14:12:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:07.662 14:12:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:07.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.662 --rc genhtml_branch_coverage=1 00:11:07.662 --rc genhtml_function_coverage=1 00:11:07.662 --rc genhtml_legend=1 00:11:07.662 --rc geninfo_all_blocks=1 00:11:07.662 --rc geninfo_unexecuted_blocks=1 00:11:07.662 00:11:07.662 ' 00:11:07.662 14:12:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:07.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.662 --rc genhtml_branch_coverage=1 00:11:07.662 --rc genhtml_function_coverage=1 00:11:07.662 --rc genhtml_legend=1 00:11:07.662 --rc geninfo_all_blocks=1 00:11:07.662 --rc geninfo_unexecuted_blocks=1 00:11:07.662 00:11:07.662 ' 00:11:07.662 14:12:06 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:07.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.662 --rc genhtml_branch_coverage=1 00:11:07.662 --rc genhtml_function_coverage=1 00:11:07.662 --rc genhtml_legend=1 00:11:07.662 --rc geninfo_all_blocks=1 00:11:07.662 --rc geninfo_unexecuted_blocks=1 00:11:07.662 00:11:07.662 ' 00:11:07.662 14:12:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:07.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.662 --rc genhtml_branch_coverage=1 00:11:07.662 --rc genhtml_function_coverage=1 00:11:07.662 --rc genhtml_legend=1 00:11:07.662 --rc geninfo_all_blocks=1 00:11:07.662 --rc geninfo_unexecuted_blocks=1 00:11:07.662 00:11:07.662 ' 00:11:07.662 14:12:06 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:07.662 14:12:06 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66684 00:11:07.662 14:12:06 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66684 00:11:07.662 14:12:06 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66715 00:11:07.662 14:12:06 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:07.662 14:12:06 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66715 00:11:07.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.662 14:12:06 -- common/autotest_common.sh@829 -- # '[' -z 66715 ']' 00:11:07.662 14:12:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.662 14:12:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:07.662 14:12:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.662 14:12:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:07.662 14:12:06 -- common/autotest_common.sh@10 -- # set +x 00:11:07.662 14:12:06 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:07.922 [2024-11-19 14:12:06.237392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
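The nvme_rpc_timeouts run below saves the default bdev_nvme settings, applies new timeouts over RPC, saves a second snapshot, and compares the two. A condensed sketch of that flow using the same calls the trace shows (redirecting save_config output into the tmp files named above is an assumption inferred from the comparison step):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > /tmp/settings_default_66684
  $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc save_config > /tmp/settings_modified_66684
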
00:11:07.922 [2024-11-19 14:12:06.237562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66715 ] 00:11:07.922 [2024-11-19 14:12:06.393569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:08.183 [2024-11-19 14:12:06.667099] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:08.183 [2024-11-19 14:12:06.667708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.183 [2024-11-19 14:12:06.667786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.603 14:12:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.603 14:12:07 -- common/autotest_common.sh@862 -- # return 0 00:11:09.603 Checking default timeout settings: 00:11:09.603 14:12:07 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:09.603 14:12:07 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:09.603 Making settings changes with rpc: 00:11:09.603 14:12:08 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:09.603 14:12:08 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:09.880 Check default vs. modified settings: 00:11:09.880 14:12:08 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:09.880 14:12:08 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66684 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66684 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:10.139 Setting action_on_timeout is changed as expected. 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
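Each per-setting check in the trace pulls the value out of the saved JSON with grep/awk/sed and compares the two snapshots; spelled out for action_on_timeout:

  before=$(grep action_on_timeout /tmp/settings_default_66684 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')     # -> none
  after=$(grep action_on_timeout /tmp/settings_modified_66684 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')     # -> abort
  [ "$before" != "$after" ] && echo "Setting action_on_timeout is changed as expected."
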
00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66684 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66684 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:10.139 Setting timeout_us is changed as expected. 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66684 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66684 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:10.139 Setting timeout_admin_us is changed as expected. 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66684 /tmp/settings_modified_66684 00:11:10.139 14:12:08 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66715 00:11:10.139 14:12:08 -- common/autotest_common.sh@936 -- # '[' -z 66715 ']' 00:11:10.139 14:12:08 -- common/autotest_common.sh@940 -- # kill -0 66715 00:11:10.139 14:12:08 -- common/autotest_common.sh@941 -- # uname 00:11:10.139 14:12:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:10.139 14:12:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66715 00:11:10.139 14:12:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:10.139 14:12:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:10.139 killing process with pid 66715 00:11:10.140 14:12:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66715' 00:11:10.140 14:12:08 -- common/autotest_common.sh@955 -- # kill 66715 00:11:10.140 14:12:08 -- common/autotest_common.sh@960 -- # wait 66715 00:11:11.517 RPC TIMEOUT SETTING TEST PASSED. 00:11:11.517 14:12:09 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:11:11.517 00:11:11.517 real 0m3.923s 00:11:11.517 user 0m7.281s 00:11:11.517 sys 0m0.766s 00:11:11.517 ************************************ 00:11:11.517 END TEST nvme_rpc_timeouts 00:11:11.517 ************************************ 00:11:11.517 14:12:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:11.517 14:12:09 -- common/autotest_common.sh@10 -- # set +x 00:11:11.517 14:12:09 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:11:11.517 14:12:09 -- spdk/autotest.sh@242 -- # [[ 1 -eq 1 ]] 00:11:11.517 14:12:09 -- spdk/autotest.sh@243 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:11.517 14:12:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:11.517 14:12:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:11.517 14:12:09 -- common/autotest_common.sh@10 -- # set +x 00:11:11.517 ************************************ 00:11:11.517 START TEST nvme_xnvme 00:11:11.517 ************************************ 00:11:11.517 14:12:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:11.517 * Looking for test storage... 00:11:11.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:11.517 14:12:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:11.517 14:12:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:11.517 14:12:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:11.779 14:12:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:11.779 14:12:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:11.779 14:12:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:11.779 14:12:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:11.779 14:12:10 -- scripts/common.sh@335 -- # IFS=.-: 00:11:11.779 14:12:10 -- scripts/common.sh@335 -- # read -ra ver1 00:11:11.779 14:12:10 -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.779 14:12:10 -- scripts/common.sh@336 -- # read -ra ver2 00:11:11.779 14:12:10 -- scripts/common.sh@337 -- # local 'op=<' 00:11:11.779 14:12:10 -- scripts/common.sh@339 -- # ver1_l=2 00:11:11.779 14:12:10 -- scripts/common.sh@340 -- # ver2_l=1 00:11:11.779 14:12:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:11.779 14:12:10 -- scripts/common.sh@343 -- # case "$op" in 00:11:11.779 14:12:10 -- scripts/common.sh@344 -- # : 1 00:11:11.779 14:12:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:11.779 14:12:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:11.779 14:12:10 -- scripts/common.sh@364 -- # decimal 1 00:11:11.779 14:12:10 -- scripts/common.sh@352 -- # local d=1 00:11:11.779 14:12:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.779 14:12:10 -- scripts/common.sh@354 -- # echo 1 00:11:11.779 14:12:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:11.779 14:12:10 -- scripts/common.sh@365 -- # decimal 2 00:11:11.779 14:12:10 -- scripts/common.sh@352 -- # local d=2 00:11:11.779 14:12:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.779 14:12:10 -- scripts/common.sh@354 -- # echo 2 00:11:11.779 14:12:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:11.779 14:12:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:11.779 14:12:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:11.779 14:12:10 -- scripts/common.sh@367 -- # return 0 00:11:11.779 14:12:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.779 14:12:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:11.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.779 --rc genhtml_branch_coverage=1 00:11:11.779 --rc genhtml_function_coverage=1 00:11:11.779 --rc genhtml_legend=1 00:11:11.779 --rc geninfo_all_blocks=1 00:11:11.779 --rc geninfo_unexecuted_blocks=1 00:11:11.779 00:11:11.779 ' 00:11:11.779 14:12:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:11.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.779 --rc genhtml_branch_coverage=1 00:11:11.779 --rc genhtml_function_coverage=1 00:11:11.780 --rc genhtml_legend=1 00:11:11.780 --rc geninfo_all_blocks=1 00:11:11.780 --rc geninfo_unexecuted_blocks=1 00:11:11.780 00:11:11.780 ' 00:11:11.780 14:12:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:11.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.780 --rc genhtml_branch_coverage=1 00:11:11.780 --rc genhtml_function_coverage=1 00:11:11.780 --rc genhtml_legend=1 00:11:11.780 --rc geninfo_all_blocks=1 00:11:11.780 --rc geninfo_unexecuted_blocks=1 00:11:11.780 00:11:11.780 ' 00:11:11.780 14:12:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:11.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.780 --rc genhtml_branch_coverage=1 00:11:11.780 --rc genhtml_function_coverage=1 00:11:11.780 --rc genhtml_legend=1 00:11:11.780 --rc geninfo_all_blocks=1 00:11:11.780 --rc geninfo_unexecuted_blocks=1 00:11:11.780 00:11:11.780 ' 00:11:11.780 14:12:10 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:11.780 14:12:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.780 14:12:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.780 14:12:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.780 14:12:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.780 14:12:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.780 14:12:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.780 14:12:10 -- paths/export.sh@5 -- # export PATH 00:11:11.780 14:12:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:11.780 14:12:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:11.780 14:12:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:11.780 14:12:10 -- common/autotest_common.sh@10 -- # set +x 00:11:11.780 ************************************ 00:11:11.780 START TEST xnvme_to_malloc_dd_copy 00:11:11.780 ************************************ 00:11:11.780 14:12:10 -- common/autotest_common.sh@1114 -- # malloc_to_xnvme_copy 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:11.780 14:12:10 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:11.780 14:12:10 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:11.780 14:12:10 -- dd/common.sh@191 -- # return 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@18 -- # local io 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:11.780 14:12:10 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:11.780 14:12:10 -- dd/common.sh@31 -- # xtrace_disable 00:11:11.780 14:12:10 -- common/autotest_common.sh@10 -- # set +x 00:11:11.780 { 00:11:11.780 "subsystems": [ 00:11:11.780 { 00:11:11.780 "subsystem": "bdev", 00:11:11.780 "config": [ 00:11:11.780 { 00:11:11.780 "params": { 00:11:11.780 "block_size": 512, 00:11:11.780 "num_blocks": 2097152, 00:11:11.780 "name": "malloc0" 00:11:11.780 }, 00:11:11.780 "method": "bdev_malloc_create" 00:11:11.780 }, 00:11:11.780 { 00:11:11.780 "params": { 00:11:11.780 "io_mechanism": "libaio", 00:11:11.780 "filename": "/dev/nullb0", 00:11:11.780 "name": "null0" 00:11:11.780 }, 00:11:11.780 "method": "bdev_xnvme_create" 00:11:11.780 }, 00:11:11.780 { 00:11:11.780 "method": "bdev_wait_for_examine" 00:11:11.780 } 00:11:11.780 ] 00:11:11.780 } 00:11:11.780 ] 00:11:11.780 } 00:11:11.780 [2024-11-19 14:12:10.216713] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:11.780 [2024-11-19 14:12:10.216819] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66852 ] 00:11:12.040 [2024-11-19 14:12:10.366925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.040 [2024-11-19 14:12:10.557137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.941  [2024-11-19T14:12:13.437Z] Copying: 306/1024 [MB] (306 MBps) [2024-11-19T14:12:14.813Z] Copying: 614/1024 [MB] (307 MBps) [2024-11-19T14:12:14.813Z] Copying: 922/1024 [MB] (307 MBps) [2024-11-19T14:12:17.397Z] Copying: 1024/1024 [MB] (average 307 MBps) 00:11:18.835 00:11:18.835 14:12:16 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:18.835 14:12:16 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:18.835 14:12:16 -- dd/common.sh@31 -- # xtrace_disable 00:11:18.835 14:12:16 -- common/autotest_common.sh@10 -- # set +x 00:11:18.835 { 00:11:18.835 "subsystems": [ 00:11:18.835 { 00:11:18.835 "subsystem": "bdev", 00:11:18.835 "config": [ 00:11:18.835 { 00:11:18.835 "params": { 00:11:18.835 "block_size": 512, 00:11:18.835 "num_blocks": 2097152, 00:11:18.835 "name": "malloc0" 00:11:18.835 }, 00:11:18.836 "method": "bdev_malloc_create" 00:11:18.836 }, 00:11:18.836 { 00:11:18.836 "params": { 00:11:18.836 "io_mechanism": "libaio", 00:11:18.836 "filename": "/dev/nullb0", 00:11:18.836 "name": "null0" 00:11:18.836 }, 00:11:18.836 "method": "bdev_xnvme_create" 00:11:18.836 }, 00:11:18.836 { 00:11:18.836 "method": "bdev_wait_for_examine" 00:11:18.836 } 00:11:18.836 ] 00:11:18.836 } 00:11:18.836 ] 00:11:18.836 } 00:11:18.836 [2024-11-19 14:12:16.941174] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
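Both directions of the copy above are plain spdk_dd invocations driven by the JSON shown: a 1 GiB malloc bdev (2097152 blocks of 512 B) and an xnvme bdev over /dev/nullb0 with io_mechanism=libaio. A sketch of reproducing it standalone, with the config saved to a file instead of fed through /dev/fd/62 (conf.json is a stand-in name for that config):

  sudo modprobe null_blk gb=1    # backs /dev/nullb0, as init_null_blk does above
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json conf.json   # malloc -> null
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json conf.json   # null -> malloc
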
00:11:18.836 [2024-11-19 14:12:16.941288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66936 ] 00:11:18.836 [2024-11-19 14:12:17.089640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.836 [2024-11-19 14:12:17.260334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.742  [2024-11-19T14:12:20.240Z] Copying: 309/1024 [MB] (309 MBps) [2024-11-19T14:12:21.175Z] Copying: 618/1024 [MB] (309 MBps) [2024-11-19T14:12:21.434Z] Copying: 928/1024 [MB] (310 MBps) [2024-11-19T14:12:23.966Z] Copying: 1024/1024 [MB] (average 309 MBps) 00:11:25.404 00:11:25.404 14:12:23 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:25.404 14:12:23 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:25.404 14:12:23 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:25.404 14:12:23 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:25.404 14:12:23 -- dd/common.sh@31 -- # xtrace_disable 00:11:25.404 14:12:23 -- common/autotest_common.sh@10 -- # set +x 00:11:25.404 { 00:11:25.405 "subsystems": [ 00:11:25.405 { 00:11:25.405 "subsystem": "bdev", 00:11:25.405 "config": [ 00:11:25.405 { 00:11:25.405 "params": { 00:11:25.405 "block_size": 512, 00:11:25.405 "num_blocks": 2097152, 00:11:25.405 "name": "malloc0" 00:11:25.405 }, 00:11:25.405 "method": "bdev_malloc_create" 00:11:25.405 }, 00:11:25.405 { 00:11:25.405 "params": { 00:11:25.405 "io_mechanism": "io_uring", 00:11:25.405 "filename": "/dev/nullb0", 00:11:25.405 "name": "null0" 00:11:25.405 }, 00:11:25.405 "method": "bdev_xnvme_create" 00:11:25.405 }, 00:11:25.405 { 00:11:25.405 "method": "bdev_wait_for_examine" 00:11:25.405 } 00:11:25.405 ] 00:11:25.405 } 00:11:25.405 ] 00:11:25.405 } 00:11:25.405 [2024-11-19 14:12:23.619687] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
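The pass that starts here repeats the same two copies with only the xnvme io_mechanism switched from libaio to io_uring; in RPC terms the difference is a single positional argument (same form as blockdev.sh uses later in this log):

  # libaio pass (the runs above)
  rpc.py bdev_xnvme_create /dev/nullb0 null0 libaio
  # io_uring pass (the runs that follow)
  rpc.py bdev_xnvme_create /dev/nullb0 null0 io_uring
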
00:11:25.405 [2024-11-19 14:12:23.619797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67018 ] 00:11:25.405 [2024-11-19 14:12:23.769261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.405 [2024-11-19 14:12:23.929233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.307  [2024-11-19T14:12:26.804Z] Copying: 314/1024 [MB] (314 MBps) [2024-11-19T14:12:28.179Z] Copying: 630/1024 [MB] (315 MBps) [2024-11-19T14:12:28.179Z] Copying: 928/1024 [MB] (298 MBps) [2024-11-19T14:12:30.712Z] Copying: 1024/1024 [MB] (average 310 MBps) 00:11:32.150 00:11:32.150 14:12:30 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:32.150 14:12:30 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:32.150 14:12:30 -- dd/common.sh@31 -- # xtrace_disable 00:11:32.150 14:12:30 -- common/autotest_common.sh@10 -- # set +x 00:11:32.150 { 00:11:32.150 "subsystems": [ 00:11:32.150 { 00:11:32.150 "subsystem": "bdev", 00:11:32.150 "config": [ 00:11:32.150 { 00:11:32.150 "params": { 00:11:32.150 "block_size": 512, 00:11:32.150 "num_blocks": 2097152, 00:11:32.150 "name": "malloc0" 00:11:32.150 }, 00:11:32.150 "method": "bdev_malloc_create" 00:11:32.150 }, 00:11:32.150 { 00:11:32.150 "params": { 00:11:32.150 "io_mechanism": "io_uring", 00:11:32.150 "filename": "/dev/nullb0", 00:11:32.151 "name": "null0" 00:11:32.151 }, 00:11:32.151 "method": "bdev_xnvme_create" 00:11:32.151 }, 00:11:32.151 { 00:11:32.151 "method": "bdev_wait_for_examine" 00:11:32.151 } 00:11:32.151 ] 00:11:32.151 } 00:11:32.151 ] 00:11:32.151 } 00:11:32.151 [2024-11-19 14:12:30.250365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:32.151 [2024-11-19 14:12:30.250473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67094 ] 00:11:32.151 [2024-11-19 14:12:30.397320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.151 [2024-11-19 14:12:30.563955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.066  [2024-11-19T14:12:33.572Z] Copying: 242/1024 [MB] (242 MBps) [2024-11-19T14:12:34.957Z] Copying: 562/1024 [MB] (319 MBps) [2024-11-19T14:12:35.217Z] Copying: 887/1024 [MB] (325 MBps) [2024-11-19T14:12:37.132Z] Copying: 1024/1024 [MB] (average 299 MBps) 00:11:38.570 00:11:38.570 14:12:36 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:11:38.570 14:12:36 -- dd/common.sh@195 -- # modprobe -r null_blk 00:11:38.570 00:11:38.570 real 0m26.824s 00:11:38.570 user 0m23.536s 00:11:38.570 sys 0m2.764s 00:11:38.570 14:12:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:38.570 ************************************ 00:11:38.570 END TEST xnvme_to_malloc_dd_copy 00:11:38.570 ************************************ 00:11:38.570 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:11:38.570 14:12:36 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:38.570 14:12:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:38.570 14:12:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:38.570 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:11:38.570 ************************************ 00:11:38.570 START TEST xnvme_bdevperf 00:11:38.570 ************************************ 00:11:38.570 14:12:37 -- common/autotest_common.sh@1114 -- # xnvme_bdevperf 00:11:38.570 14:12:37 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:11:38.570 14:12:37 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:38.570 14:12:37 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:38.570 14:12:37 -- dd/common.sh@191 -- # return 00:11:38.570 14:12:37 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:11:38.570 14:12:37 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:38.570 14:12:37 -- xnvme/xnvme.sh@60 -- # local io 00:11:38.570 14:12:37 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:11:38.570 14:12:37 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:11:38.570 14:12:37 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:11:38.570 14:12:37 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:11:38.570 14:12:37 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:11:38.570 14:12:37 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:38.570 14:12:37 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:38.570 14:12:37 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:38.570 14:12:37 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:38.570 14:12:37 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:38.570 14:12:37 -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:38.570 14:12:37 -- dd/common.sh@31 -- # xtrace_disable 00:11:38.570 14:12:37 -- common/autotest_common.sh@10 -- # set +x 00:11:38.570 { 00:11:38.570 "subsystems": [ 00:11:38.570 { 00:11:38.570 "subsystem": "bdev", 00:11:38.570 "config": [ 00:11:38.570 { 00:11:38.570 "params": { 00:11:38.570 "io_mechanism": "libaio", 
00:11:38.570 "filename": "/dev/nullb0", 00:11:38.570 "name": "null0" 00:11:38.570 }, 00:11:38.570 "method": "bdev_xnvme_create" 00:11:38.570 }, 00:11:38.570 { 00:11:38.570 "method": "bdev_wait_for_examine" 00:11:38.570 } 00:11:38.570 ] 00:11:38.570 } 00:11:38.570 ] 00:11:38.570 } 00:11:38.570 [2024-11-19 14:12:37.075478] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:38.570 [2024-11-19 14:12:37.075557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67204 ] 00:11:38.831 [2024-11-19 14:12:37.212292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.831 [2024-11-19 14:12:37.350510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.092 Running I/O for 5 seconds... 00:11:44.416 00:11:44.416 Latency(us) 00:11:44.416 [2024-11-19T14:12:42.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.416 [2024-11-19T14:12:42.978Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:44.416 null0 : 5.00 209551.43 818.56 0.00 0.00 303.31 117.37 1581.69 00:11:44.416 [2024-11-19T14:12:42.978Z] =================================================================================================================== 00:11:44.416 [2024-11-19T14:12:42.978Z] Total : 209551.43 818.56 0.00 0.00 303.31 117.37 1581.69 00:11:44.984 14:12:43 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:44.984 14:12:43 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:44.984 14:12:43 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:44.984 14:12:43 -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:44.984 14:12:43 -- dd/common.sh@31 -- # xtrace_disable 00:11:44.984 14:12:43 -- common/autotest_common.sh@10 -- # set +x 00:11:44.984 { 00:11:44.984 "subsystems": [ 00:11:44.984 { 00:11:44.984 "subsystem": "bdev", 00:11:44.984 "config": [ 00:11:44.984 { 00:11:44.984 "params": { 00:11:44.984 "io_mechanism": "io_uring", 00:11:44.984 "filename": "/dev/nullb0", 00:11:44.984 "name": "null0" 00:11:44.984 }, 00:11:44.984 "method": "bdev_xnvme_create" 00:11:44.984 }, 00:11:44.984 { 00:11:44.984 "method": "bdev_wait_for_examine" 00:11:44.984 } 00:11:44.984 ] 00:11:44.984 } 00:11:44.984 ] 00:11:44.984 } 00:11:44.984 [2024-11-19 14:12:43.315928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:44.984 [2024-11-19 14:12:43.316037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67273 ] 00:11:44.984 [2024-11-19 14:12:43.462392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.243 [2024-11-19 14:12:43.627326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.501 Running I/O for 5 seconds... 
00:11:50.769 00:11:50.769 Latency(us) 00:11:50.769 [2024-11-19T14:12:49.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.769 [2024-11-19T14:12:49.331Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:50.769 null0 : 5.00 238070.23 929.96 0.00 0.00 266.63 155.96 327.68 00:11:50.769 [2024-11-19T14:12:49.331Z] =================================================================================================================== 00:11:50.769 [2024-11-19T14:12:49.331Z] Total : 238070.23 929.96 0.00 0.00 266.63 155.96 327.68 00:11:51.028 14:12:49 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:11:51.028 14:12:49 -- dd/common.sh@195 -- # modprobe -r null_blk 00:11:51.028 ************************************ 00:11:51.028 END TEST xnvme_bdevperf 00:11:51.028 ************************************ 00:11:51.028 00:11:51.028 real 0m12.526s 00:11:51.028 user 0m10.114s 00:11:51.028 sys 0m2.159s 00:11:51.028 14:12:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:51.028 14:12:49 -- common/autotest_common.sh@10 -- # set +x 00:11:51.028 ************************************ 00:11:51.028 END TEST nvme_xnvme 00:11:51.028 ************************************ 00:11:51.028 00:11:51.028 real 0m39.626s 00:11:51.028 user 0m33.764s 00:11:51.028 sys 0m5.051s 00:11:51.028 14:12:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:51.028 14:12:49 -- common/autotest_common.sh@10 -- # set +x 00:11:51.291 14:12:49 -- spdk/autotest.sh@244 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:11:51.291 14:12:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:51.291 14:12:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:51.291 14:12:49 -- common/autotest_common.sh@10 -- # set +x 00:11:51.291 ************************************ 00:11:51.291 START TEST blockdev_xnvme 00:11:51.291 ************************************ 00:11:51.291 14:12:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:11:51.291 * Looking for test storage... 00:11:51.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:51.291 14:12:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:51.291 14:12:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:51.291 14:12:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:51.291 14:12:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:51.291 14:12:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:51.291 14:12:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:51.291 14:12:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:51.291 14:12:49 -- scripts/common.sh@335 -- # IFS=.-: 00:11:51.291 14:12:49 -- scripts/common.sh@335 -- # read -ra ver1 00:11:51.291 14:12:49 -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.291 14:12:49 -- scripts/common.sh@336 -- # read -ra ver2 00:11:51.291 14:12:49 -- scripts/common.sh@337 -- # local 'op=<' 00:11:51.291 14:12:49 -- scripts/common.sh@339 -- # ver1_l=2 00:11:51.291 14:12:49 -- scripts/common.sh@340 -- # ver2_l=1 00:11:51.291 14:12:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:51.291 14:12:49 -- scripts/common.sh@343 -- # case "$op" in 00:11:51.291 14:12:49 -- scripts/common.sh@344 -- # : 1 00:11:51.291 14:12:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:51.291 14:12:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:51.291 14:12:49 -- scripts/common.sh@364 -- # decimal 1 00:11:51.291 14:12:49 -- scripts/common.sh@352 -- # local d=1 00:11:51.291 14:12:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.291 14:12:49 -- scripts/common.sh@354 -- # echo 1 00:11:51.291 14:12:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:51.291 14:12:49 -- scripts/common.sh@365 -- # decimal 2 00:11:51.291 14:12:49 -- scripts/common.sh@352 -- # local d=2 00:11:51.291 14:12:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.291 14:12:49 -- scripts/common.sh@354 -- # echo 2 00:11:51.291 14:12:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:51.291 14:12:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:51.291 14:12:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:51.291 14:12:49 -- scripts/common.sh@367 -- # return 0 00:11:51.291 14:12:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.291 14:12:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:51.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.291 --rc genhtml_branch_coverage=1 00:11:51.291 --rc genhtml_function_coverage=1 00:11:51.291 --rc genhtml_legend=1 00:11:51.291 --rc geninfo_all_blocks=1 00:11:51.291 --rc geninfo_unexecuted_blocks=1 00:11:51.291 00:11:51.291 ' 00:11:51.291 14:12:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:51.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.291 --rc genhtml_branch_coverage=1 00:11:51.291 --rc genhtml_function_coverage=1 00:11:51.291 --rc genhtml_legend=1 00:11:51.291 --rc geninfo_all_blocks=1 00:11:51.291 --rc geninfo_unexecuted_blocks=1 00:11:51.291 00:11:51.291 ' 00:11:51.291 14:12:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:51.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.291 --rc genhtml_branch_coverage=1 00:11:51.291 --rc genhtml_function_coverage=1 00:11:51.291 --rc genhtml_legend=1 00:11:51.291 --rc geninfo_all_blocks=1 00:11:51.291 --rc geninfo_unexecuted_blocks=1 00:11:51.291 00:11:51.291 ' 00:11:51.291 14:12:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:51.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.291 --rc genhtml_branch_coverage=1 00:11:51.291 --rc genhtml_function_coverage=1 00:11:51.291 --rc genhtml_legend=1 00:11:51.291 --rc geninfo_all_blocks=1 00:11:51.291 --rc geninfo_unexecuted_blocks=1 00:11:51.291 00:11:51.291 ' 00:11:51.291 14:12:49 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:51.291 14:12:49 -- bdev/nbd_common.sh@6 -- # set -e 00:11:51.291 14:12:49 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:51.291 14:12:49 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:51.291 14:12:49 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:51.291 14:12:49 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:51.291 14:12:49 -- bdev/blockdev.sh@18 -- # : 00:11:51.291 14:12:49 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:11:51.291 14:12:49 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:11:51.291 14:12:49 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:11:51.291 14:12:49 -- bdev/blockdev.sh@672 -- # uname -s 00:11:51.291 14:12:49 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:11:51.291 14:12:49 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:11:51.291 14:12:49 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:11:51.291 14:12:49 -- bdev/blockdev.sh@681 -- # crypto_device= 00:11:51.291 14:12:49 -- bdev/blockdev.sh@682 -- # dek= 00:11:51.292 14:12:49 -- bdev/blockdev.sh@683 -- # env_ctx= 00:11:51.292 14:12:49 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:11:51.292 14:12:49 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:11:51.292 14:12:49 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:11:51.292 14:12:49 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:11:51.292 14:12:49 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:11:51.292 14:12:49 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=67420 00:11:51.292 14:12:49 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:51.292 14:12:49 -- bdev/blockdev.sh@47 -- # waitforlisten 67420 00:11:51.292 14:12:49 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:51.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.292 14:12:49 -- common/autotest_common.sh@829 -- # '[' -z 67420 ']' 00:11:51.292 14:12:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.292 14:12:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:51.292 14:12:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.292 14:12:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:51.292 14:12:49 -- common/autotest_common.sh@10 -- # set +x 00:11:51.553 [2024-11-19 14:12:49.918005] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:51.554 [2024-11-19 14:12:49.918376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67420 ] 00:11:51.554 [2024-11-19 14:12:50.076229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.815 [2024-11-19 14:12:50.344247] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:51.815 [2024-11-19 14:12:50.344770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.202 14:12:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.202 14:12:51 -- common/autotest_common.sh@862 -- # return 0 00:11:53.202 14:12:51 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:11:53.202 14:12:51 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:11:53.202 14:12:51 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:11:53.202 14:12:51 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:11:53.202 14:12:51 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:53.461 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:53.461 Waiting for block devices as requested 00:11:53.461 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:53.461 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:53.719 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:53.719 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:59.003 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:59.003 14:12:57 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:11:59.003 14:12:57 -- 
common/autotest_common.sh@1664 -- # zoned_devs=() 00:11:59.003 14:12:57 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:11:59.003 14:12:57 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:11:59.003 14:12:57 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:59.003 14:12:57 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:11:59.003 14:12:57 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:11:59.003 14:12:57 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:11:59.003 14:12:57 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:59.003 14:12:57 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:59.003 14:12:57 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:11:59.003 14:12:57 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:11:59.003 14:12:57 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:59.003 14:12:57 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:59.003 14:12:57 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:59.003 14:12:57 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:11:59.003 14:12:57 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:11:59.003 14:12:57 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:59.003 14:12:57 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:59.003 14:12:57 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:59.003 14:12:57 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:11:59.003 14:12:57 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:11:59.003 14:12:57 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:59.003 14:12:57 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:59.003 14:12:57 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:59.003 14:12:57 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:11:59.003 14:12:57 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:11:59.003 14:12:57 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:59.003 14:12:57 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:59.003 14:12:57 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:59.003 14:12:57 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:11:59.003 14:12:57 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:11:59.003 14:12:57 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:59.003 14:12:57 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:59.003 14:12:57 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:59.003 14:12:57 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:11:59.003 14:12:57 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:11:59.003 14:12:57 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:59.003 14:12:57 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:59.003 14:12:57 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:59.003 14:12:57 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:11:59.003 14:12:57 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:59.003 14:12:57 -- bdev/blockdev.sh@94 -- # 
nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:59.003 14:12:57 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:59.003 14:12:57 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:11:59.003 14:12:57 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:59.003 14:12:57 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:59.003 14:12:57 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:59.003 14:12:57 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:11:59.003 14:12:57 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:59.003 14:12:57 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:59.003 14:12:57 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:59.003 14:12:57 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:11:59.003 14:12:57 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:59.003 14:12:57 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:59.003 14:12:57 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:59.003 14:12:57 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:11:59.003 14:12:57 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:59.003 14:12:57 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:59.003 14:12:57 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:59.003 14:12:57 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:11:59.003 14:12:57 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:59.003 14:12:57 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:59.003 14:12:57 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:11:59.003 14:12:57 -- bdev/blockdev.sh@98 -- # rpc_cmd 00:11:59.003 14:12:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.003 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:11:59.003 14:12:57 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:11:59.003 nvme0n1 00:11:59.003 nvme1n1 00:11:59.003 nvme1n2 00:11:59.003 nvme1n3 00:11:59.003 nvme2n1 00:11:59.003 nvme3n1 00:11:59.003 14:12:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.003 14:12:57 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:11:59.003 14:12:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.003 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:11:59.003 14:12:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.003 14:12:57 -- bdev/blockdev.sh@738 -- # cat 00:11:59.003 14:12:57 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:11:59.003 14:12:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.003 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:11:59.003 14:12:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.003 14:12:57 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:11:59.003 14:12:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.003 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:11:59.003 14:12:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.003 14:12:57 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:59.003 14:12:57 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.003 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:11:59.003 14:12:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.003 14:12:57 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:11:59.003 14:12:57 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:11:59.003 14:12:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.003 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:11:59.003 14:12:57 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:11:59.003 14:12:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.003 14:12:57 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:11:59.003 14:12:57 -- bdev/blockdev.sh@747 -- # jq -r .name 00:11:59.004 14:12:57 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "970fe5ac-c4d7-49af-a914-98c1514828a5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "970fe5ac-c4d7-49af-a914-98c1514828a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "67a72b3e-bfbc-419f-8358-a8918a71ccdc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "67a72b3e-bfbc-419f-8358-a8918a71ccdc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "360475e1-6e4d-472b-a747-7a64bfd95843"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "360475e1-6e4d-472b-a747-7a64bfd95843",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "eafc9bba-134e-4557-8921-494617bf13de"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eafc9bba-134e-4557-8921-494617bf13de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": 
false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "00231919-56c2-4b31-b3b8-045a57ffdd52"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "00231919-56c2-4b31-b3b8-045a57ffdd52",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "71fb65ba-5a61-4aa1-8b54-c283587df9ba"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "71fb65ba-5a61-4aa1-8b54-c283587df9ba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:11:59.004 14:12:57 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:11:59.004 14:12:57 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:11:59.004 14:12:57 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:11:59.004 14:12:57 -- bdev/blockdev.sh@752 -- # killprocess 67420 00:11:59.004 14:12:57 -- common/autotest_common.sh@936 -- # '[' -z 67420 ']' 00:11:59.004 14:12:57 -- common/autotest_common.sh@940 -- # kill -0 67420 00:11:59.004 14:12:57 -- common/autotest_common.sh@941 -- # uname 00:11:59.004 14:12:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:59.004 14:12:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67420 00:11:59.004 killing process with pid 67420 00:11:59.004 14:12:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:59.004 14:12:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:59.004 14:12:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67420' 00:11:59.004 14:12:57 -- common/autotest_common.sh@955 -- # kill 67420 00:11:59.004 14:12:57 -- common/autotest_common.sh@960 -- # wait 67420 00:12:00.380 14:12:58 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:00.380 14:12:58 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:00.381 14:12:58 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:00.381 14:12:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:00.381 14:12:58 -- common/autotest_common.sh@10 -- # set +x 00:12:00.381 ************************************ 00:12:00.381 START TEST bdev_hello_world 00:12:00.381 ************************************ 00:12:00.381 14:12:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:00.381 [2024-11-19 14:12:58.785401] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:00.381 [2024-11-19 14:12:58.785506] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67807 ] 00:12:00.381 [2024-11-19 14:12:58.934297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.639 [2024-11-19 14:12:59.106196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.898 [2024-11-19 14:12:59.411483] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:00.898 [2024-11-19 14:12:59.411529] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:00.898 [2024-11-19 14:12:59.411542] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:00.898 [2024-11-19 14:12:59.413094] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:00.898 [2024-11-19 14:12:59.413647] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:00.898 [2024-11-19 14:12:59.413671] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:00.898 [2024-11-19 14:12:59.414080] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:00.898 00:12:00.898 [2024-11-19 14:12:59.414103] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:01.833 00:12:01.833 real 0m1.356s 00:12:01.833 user 0m1.041s 00:12:01.833 sys 0m0.197s 00:12:01.833 14:13:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:01.833 14:13:00 -- common/autotest_common.sh@10 -- # set +x 00:12:01.834 ************************************ 00:12:01.834 END TEST bdev_hello_world 00:12:01.834 ************************************ 00:12:01.834 14:13:00 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:01.834 14:13:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:01.834 14:13:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:01.834 14:13:00 -- common/autotest_common.sh@10 -- # set +x 00:12:01.834 ************************************ 00:12:01.834 START TEST bdev_bounds 00:12:01.834 ************************************ 00:12:01.834 14:13:00 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:12:01.834 Process bdevio pid: 67844 00:12:01.834 14:13:00 -- bdev/blockdev.sh@288 -- # bdevio_pid=67844 00:12:01.834 14:13:00 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:01.834 14:13:00 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 67844' 00:12:01.834 14:13:00 -- bdev/blockdev.sh@291 -- # waitforlisten 67844 00:12:01.834 14:13:00 -- common/autotest_common.sh@829 -- # '[' -z 67844 ']' 00:12:01.834 14:13:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.834 14:13:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:01.834 14:13:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:01.834 14:13:00 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:01.834 14:13:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:01.834 14:13:00 -- common/autotest_common.sh@10 -- # set +x 00:12:01.834 [2024-11-19 14:13:00.198638] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:01.834 [2024-11-19 14:13:00.198740] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67844 ] 00:12:01.834 [2024-11-19 14:13:00.342992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:02.092 [2024-11-19 14:13:00.519849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.093 [2024-11-19 14:13:00.520129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.093 [2024-11-19 14:13:00.520155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.660 14:13:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:02.660 14:13:01 -- common/autotest_common.sh@862 -- # return 0 00:12:02.660 14:13:01 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:02.660 I/O targets: 00:12:02.660 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:02.660 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:02.660 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:02.660 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:02.660 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:02.660 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:02.660 00:12:02.660 00:12:02.660 CUnit - A unit testing framework for C - Version 2.1-3 00:12:02.660 http://cunit.sourceforge.net/ 00:12:02.660 00:12:02.660 00:12:02.660 Suite: bdevio tests on: nvme3n1 00:12:02.660 Test: blockdev write read block ...passed 00:12:02.660 Test: blockdev write zeroes read block ...passed 00:12:02.660 Test: blockdev write zeroes read no split ...passed 00:12:02.660 Test: blockdev write zeroes read split ...passed 00:12:02.660 Test: blockdev write zeroes read split partial ...passed 00:12:02.660 Test: blockdev reset ...passed 00:12:02.660 Test: blockdev write read 8 blocks ...passed 00:12:02.660 Test: blockdev write read size > 128k ...passed 00:12:02.660 Test: blockdev write read invalid size ...passed 00:12:02.660 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:02.660 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:02.660 Test: blockdev write read max offset ...passed 00:12:02.660 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:02.660 Test: blockdev writev readv 8 blocks ...passed 00:12:02.660 Test: blockdev writev readv 30 x 1block ...passed 00:12:02.660 Test: blockdev writev readv block ...passed 00:12:02.660 Test: blockdev writev readv size > 128k ...passed 00:12:02.660 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:02.660 Test: blockdev comparev and writev ...passed 00:12:02.660 Test: blockdev nvme passthru rw ...passed 00:12:02.660 Test: blockdev nvme passthru vendor specific ...passed 00:12:02.660 Test: blockdev nvme admin passthru ...passed 00:12:02.660 Test: blockdev copy ...passed 00:12:02.660 Suite: bdevio tests on: nvme2n1 00:12:02.660 Test: blockdev write read 
block ...passed 00:12:02.660 Test: blockdev write zeroes read block ...passed 00:12:02.660 Test: blockdev write zeroes read no split ...passed 00:12:02.660 Test: blockdev write zeroes read split ...passed 00:12:02.919 Test: blockdev write zeroes read split partial ...passed 00:12:02.919 Test: blockdev reset ...passed 00:12:02.919 Test: blockdev write read 8 blocks ...passed 00:12:02.919 Test: blockdev write read size > 128k ...passed 00:12:02.919 Test: blockdev write read invalid size ...passed 00:12:02.919 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:02.919 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:02.919 Test: blockdev write read max offset ...passed 00:12:02.919 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:02.919 Test: blockdev writev readv 8 blocks ...passed 00:12:02.919 Test: blockdev writev readv 30 x 1block ...passed 00:12:02.919 Test: blockdev writev readv block ...passed 00:12:02.919 Test: blockdev writev readv size > 128k ...passed 00:12:02.919 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:02.919 Test: blockdev comparev and writev ...passed 00:12:02.919 Test: blockdev nvme passthru rw ...passed 00:12:02.919 Test: blockdev nvme passthru vendor specific ...passed 00:12:02.919 Test: blockdev nvme admin passthru ...passed 00:12:02.919 Test: blockdev copy ...passed 00:12:02.919 Suite: bdevio tests on: nvme1n3 00:12:02.919 Test: blockdev write read block ...passed 00:12:02.919 Test: blockdev write zeroes read block ...passed 00:12:02.919 Test: blockdev write zeroes read no split ...passed 00:12:02.919 Test: blockdev write zeroes read split ...passed 00:12:02.919 Test: blockdev write zeroes read split partial ...passed 00:12:02.919 Test: blockdev reset ...passed 00:12:02.919 Test: blockdev write read 8 blocks ...passed 00:12:02.919 Test: blockdev write read size > 128k ...passed 00:12:02.919 Test: blockdev write read invalid size ...passed 00:12:02.919 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:02.919 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:02.919 Test: blockdev write read max offset ...passed 00:12:02.919 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:02.919 Test: blockdev writev readv 8 blocks ...passed 00:12:02.919 Test: blockdev writev readv 30 x 1block ...passed 00:12:02.919 Test: blockdev writev readv block ...passed 00:12:02.919 Test: blockdev writev readv size > 128k ...passed 00:12:02.919 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:02.919 Test: blockdev comparev and writev ...passed 00:12:02.919 Test: blockdev nvme passthru rw ...passed 00:12:02.919 Test: blockdev nvme passthru vendor specific ...passed 00:12:02.919 Test: blockdev nvme admin passthru ...passed 00:12:02.919 Test: blockdev copy ...passed 00:12:02.919 Suite: bdevio tests on: nvme1n2 00:12:02.919 Test: blockdev write read block ...passed 00:12:02.919 Test: blockdev write zeroes read block ...passed 00:12:02.919 Test: blockdev write zeroes read no split ...passed 00:12:02.919 Test: blockdev write zeroes read split ...passed 00:12:02.919 Test: blockdev write zeroes read split partial ...passed 00:12:02.919 Test: blockdev reset ...passed 00:12:02.919 Test: blockdev write read 8 blocks ...passed 00:12:02.919 Test: blockdev write read size > 128k ...passed 00:12:02.919 Test: blockdev write read invalid size ...passed 00:12:02.919 Test: blockdev write read offset + nbytes 
== size of blockdev ...passed 00:12:02.919 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:02.919 Test: blockdev write read max offset ...passed 00:12:02.919 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:02.919 Test: blockdev writev readv 8 blocks ...passed 00:12:02.919 Test: blockdev writev readv 30 x 1block ...passed 00:12:02.919 Test: blockdev writev readv block ...passed 00:12:02.919 Test: blockdev writev readv size > 128k ...passed 00:12:02.919 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:02.919 Test: blockdev comparev and writev ...passed 00:12:02.919 Test: blockdev nvme passthru rw ...passed 00:12:02.919 Test: blockdev nvme passthru vendor specific ...passed 00:12:02.919 Test: blockdev nvme admin passthru ...passed 00:12:02.919 Test: blockdev copy ...passed 00:12:02.919 Suite: bdevio tests on: nvme1n1 00:12:02.919 Test: blockdev write read block ...passed 00:12:02.919 Test: blockdev write zeroes read block ...passed 00:12:02.919 Test: blockdev write zeroes read no split ...passed 00:12:02.919 Test: blockdev write zeroes read split ...passed 00:12:02.919 Test: blockdev write zeroes read split partial ...passed 00:12:02.919 Test: blockdev reset ...passed 00:12:02.919 Test: blockdev write read 8 blocks ...passed 00:12:02.919 Test: blockdev write read size > 128k ...passed 00:12:02.919 Test: blockdev write read invalid size ...passed 00:12:02.919 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:02.919 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:02.919 Test: blockdev write read max offset ...passed 00:12:02.919 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:02.919 Test: blockdev writev readv 8 blocks ...passed 00:12:02.919 Test: blockdev writev readv 30 x 1block ...passed 00:12:02.919 Test: blockdev writev readv block ...passed 00:12:02.919 Test: blockdev writev readv size > 128k ...passed 00:12:02.919 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:02.919 Test: blockdev comparev and writev ...passed 00:12:02.919 Test: blockdev nvme passthru rw ...passed 00:12:02.919 Test: blockdev nvme passthru vendor specific ...passed 00:12:02.919 Test: blockdev nvme admin passthru ...passed 00:12:02.919 Test: blockdev copy ...passed 00:12:02.919 Suite: bdevio tests on: nvme0n1 00:12:02.919 Test: blockdev write read block ...passed 00:12:02.919 Test: blockdev write zeroes read block ...passed 00:12:02.919 Test: blockdev write zeroes read no split ...passed 00:12:02.919 Test: blockdev write zeroes read split ...passed 00:12:03.178 Test: blockdev write zeroes read split partial ...passed 00:12:03.178 Test: blockdev reset ...passed 00:12:03.178 Test: blockdev write read 8 blocks ...passed 00:12:03.178 Test: blockdev write read size > 128k ...passed 00:12:03.178 Test: blockdev write read invalid size ...passed 00:12:03.178 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:03.178 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:03.178 Test: blockdev write read max offset ...passed 00:12:03.178 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:03.178 Test: blockdev writev readv 8 blocks ...passed 00:12:03.178 Test: blockdev writev readv 30 x 1block ...passed 00:12:03.178 Test: blockdev writev readv block ...passed 00:12:03.178 Test: blockdev writev readv size > 128k ...passed 00:12:03.178 Test: blockdev writev readv size > 
128k in two iovs ...passed 00:12:03.178 Test: blockdev comparev and writev ...passed 00:12:03.178 Test: blockdev nvme passthru rw ...passed 00:12:03.178 Test: blockdev nvme passthru vendor specific ...passed 00:12:03.178 Test: blockdev nvme admin passthru ...passed 00:12:03.178 Test: blockdev copy ...passed 00:12:03.178 00:12:03.178 Run Summary: Type Total Ran Passed Failed Inactive 00:12:03.178 suites 6 6 n/a 0 0 00:12:03.178 tests 138 138 138 0 0 00:12:03.178 asserts 780 780 780 0 n/a 00:12:03.178 00:12:03.178 Elapsed time = 1.012 seconds 00:12:03.178 0 00:12:03.178 14:13:01 -- bdev/blockdev.sh@293 -- # killprocess 67844 00:12:03.178 14:13:01 -- common/autotest_common.sh@936 -- # '[' -z 67844 ']' 00:12:03.178 14:13:01 -- common/autotest_common.sh@940 -- # kill -0 67844 00:12:03.178 14:13:01 -- common/autotest_common.sh@941 -- # uname 00:12:03.178 14:13:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:03.178 14:13:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67844 00:12:03.178 14:13:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:03.178 killing process with pid 67844 00:12:03.178 14:13:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:03.178 14:13:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67844' 00:12:03.178 14:13:01 -- common/autotest_common.sh@955 -- # kill 67844 00:12:03.178 14:13:01 -- common/autotest_common.sh@960 -- # wait 67844 00:12:03.748 14:13:02 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:12:03.748 00:12:03.748 real 0m2.064s 00:12:03.748 user 0m4.834s 00:12:03.748 sys 0m0.290s 00:12:03.748 14:13:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:03.748 14:13:02 -- common/autotest_common.sh@10 -- # set +x 00:12:03.748 ************************************ 00:12:03.748 END TEST bdev_bounds 00:12:03.748 ************************************ 00:12:03.748 14:13:02 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:12:03.748 14:13:02 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:03.748 14:13:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:03.748 14:13:02 -- common/autotest_common.sh@10 -- # set +x 00:12:03.748 ************************************ 00:12:03.748 START TEST bdev_nbd 00:12:03.748 ************************************ 00:12:03.748 14:13:02 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:12:03.748 14:13:02 -- bdev/blockdev.sh@298 -- # uname -s 00:12:03.748 14:13:02 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:12:03.748 14:13:02 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:03.748 14:13:02 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:03.748 14:13:02 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:03.748 14:13:02 -- bdev/blockdev.sh@302 -- # local bdev_all 00:12:03.748 14:13:02 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:12:03.748 14:13:02 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:12:03.748 14:13:02 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' 
'/dev/nbd9') 00:12:03.748 14:13:02 -- bdev/blockdev.sh@309 -- # local nbd_all 00:12:03.748 14:13:02 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:12:03.748 14:13:02 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:03.748 14:13:02 -- bdev/blockdev.sh@312 -- # local nbd_list 00:12:03.748 14:13:02 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:03.748 14:13:02 -- bdev/blockdev.sh@313 -- # local bdev_list 00:12:03.748 14:13:02 -- bdev/blockdev.sh@316 -- # nbd_pid=67899 00:12:03.748 14:13:02 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:03.748 14:13:02 -- bdev/blockdev.sh@318 -- # waitforlisten 67899 /var/tmp/spdk-nbd.sock 00:12:03.748 14:13:02 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:03.748 14:13:02 -- common/autotest_common.sh@829 -- # '[' -z 67899 ']' 00:12:03.748 14:13:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:03.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:03.748 14:13:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:03.748 14:13:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:03.748 14:13:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:03.748 14:13:02 -- common/autotest_common.sh@10 -- # set +x 00:12:04.008 [2024-11-19 14:13:02.335061] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:04.008 [2024-11-19 14:13:02.335161] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.008 [2024-11-19 14:13:02.482826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.266 [2024-11-19 14:13:02.655377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.833 14:13:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:04.833 14:13:03 -- common/autotest_common.sh@862 -- # return 0 00:12:04.833 14:13:03 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:04.833 14:13:03 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:04.833 14:13:03 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:04.833 14:13:03 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:04.833 14:13:03 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:04.833 14:13:03 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:04.833 14:13:03 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:04.833 14:13:03 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:04.833 14:13:03 -- bdev/nbd_common.sh@24 -- # local i 00:12:04.833 14:13:03 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:04.833 14:13:03 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:04.833 14:13:03 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:04.833 14:13:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:04.833 14:13:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:04.833 14:13:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:04.833 14:13:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:04.833 14:13:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:04.833 14:13:03 -- common/autotest_common.sh@867 -- # local i 00:12:04.833 14:13:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:04.833 14:13:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:04.833 14:13:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:04.833 14:13:03 -- common/autotest_common.sh@871 -- # break 00:12:04.833 14:13:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:04.833 14:13:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:04.833 14:13:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.833 1+0 records in 00:12:04.833 1+0 records out 00:12:04.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435868 s, 9.4 MB/s 00:12:04.833 14:13:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.833 14:13:03 -- common/autotest_common.sh@884 -- # size=4096 00:12:04.833 14:13:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.833 14:13:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:04.833 14:13:03 -- common/autotest_common.sh@887 -- # return 0 00:12:04.833 14:13:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:04.833 14:13:03 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:04.833 14:13:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:05.092 14:13:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:05.092 14:13:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:05.092 14:13:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:05.092 14:13:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:05.092 14:13:03 -- common/autotest_common.sh@867 -- # local i 00:12:05.092 14:13:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:05.092 14:13:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:05.092 14:13:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:05.092 14:13:03 -- common/autotest_common.sh@871 -- # break 00:12:05.092 14:13:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:05.092 14:13:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:05.092 14:13:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.092 1+0 records in 00:12:05.092 1+0 records out 00:12:05.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126227 s, 3.2 MB/s 00:12:05.092 14:13:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.092 14:13:03 -- common/autotest_common.sh@884 -- # size=4096 00:12:05.092 14:13:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.092 14:13:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:05.092 14:13:03 -- common/autotest_common.sh@887 -- # return 0 00:12:05.092 14:13:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:05.092 14:13:03 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:05.092 14:13:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:12:05.351 14:13:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:05.351 14:13:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:05.351 14:13:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:05.351 14:13:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:05.351 14:13:03 -- common/autotest_common.sh@867 -- # local i 00:12:05.351 14:13:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:05.351 14:13:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:05.351 14:13:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:05.351 14:13:03 -- common/autotest_common.sh@871 -- # break 00:12:05.351 14:13:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:05.351 14:13:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:05.351 14:13:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.351 1+0 records in 00:12:05.351 1+0 records out 00:12:05.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728658 s, 5.6 MB/s 00:12:05.351 14:13:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.351 14:13:03 -- common/autotest_common.sh@884 -- # size=4096 00:12:05.351 14:13:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.351 14:13:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:05.351 14:13:03 -- common/autotest_common.sh@887 -- # return 0 
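The dd/stat pairs above are the waitfornbd helper from common/autotest_common.sh validating each device as it is attached: poll /proc/partitions until the kernel publishes the node, then read one 4096-byte block with O_DIRECT and confirm the copy is non-empty. A condensed reconstruction from the trace (the retry delay is an assumption, since this trace only shows first-attempt successes; the scratch-file path is shortened):

    waitfornbd() {                         # condensed from common/autotest_common.sh@866-887
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do    # wait for the kernel to list the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                      # assumed delay between retries; not visible in this trace
        done
        # prove the device serves reads: one O_DIRECT block
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                   # non-empty copy => device is live
    }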
00:12:05.351 14:13:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:05.351 14:13:03 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:05.351 14:13:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:12:05.610 14:13:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:05.610 14:13:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:05.610 14:13:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:05.610 14:13:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:05.610 14:13:04 -- common/autotest_common.sh@867 -- # local i 00:12:05.610 14:13:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:05.610 14:13:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:05.610 14:13:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:05.610 14:13:04 -- common/autotest_common.sh@871 -- # break 00:12:05.610 14:13:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:05.610 14:13:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:05.610 14:13:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.610 1+0 records in 00:12:05.610 1+0 records out 00:12:05.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000758935 s, 5.4 MB/s 00:12:05.610 14:13:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.610 14:13:04 -- common/autotest_common.sh@884 -- # size=4096 00:12:05.610 14:13:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.610 14:13:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:05.610 14:13:04 -- common/autotest_common.sh@887 -- # return 0 00:12:05.610 14:13:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:05.610 14:13:04 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:05.610 14:13:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:05.868 14:13:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:05.868 14:13:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:05.868 14:13:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:05.868 14:13:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:05.868 14:13:04 -- common/autotest_common.sh@867 -- # local i 00:12:05.868 14:13:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:05.868 14:13:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:05.868 14:13:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:05.868 14:13:04 -- common/autotest_common.sh@871 -- # break 00:12:05.868 14:13:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:05.868 14:13:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:05.868 14:13:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.868 1+0 records in 00:12:05.868 1+0 records out 00:12:05.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000807745 s, 5.1 MB/s 00:12:05.868 14:13:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.868 14:13:04 -- common/autotest_common.sh@884 -- # size=4096 00:12:05.868 14:13:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.868 14:13:04 -- common/autotest_common.sh@886 -- # '[' 
4096 '!=' 0 ']' 00:12:05.868 14:13:04 -- common/autotest_common.sh@887 -- # return 0 00:12:05.868 14:13:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:05.868 14:13:04 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:05.868 14:13:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:06.127 14:13:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:06.127 14:13:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:06.127 14:13:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:06.127 14:13:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:06.127 14:13:04 -- common/autotest_common.sh@867 -- # local i 00:12:06.127 14:13:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:06.127 14:13:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:06.127 14:13:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:06.127 14:13:04 -- common/autotest_common.sh@871 -- # break 00:12:06.127 14:13:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:06.127 14:13:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:06.127 14:13:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.127 1+0 records in 00:12:06.127 1+0 records out 00:12:06.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00087787 s, 4.7 MB/s 00:12:06.127 14:13:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.127 14:13:04 -- common/autotest_common.sh@884 -- # size=4096 00:12:06.128 14:13:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.128 14:13:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:06.128 14:13:04 -- common/autotest_common.sh@887 -- # return 0 00:12:06.128 14:13:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:06.128 14:13:04 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:06.128 14:13:04 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:06.128 14:13:04 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:06.128 { 00:12:06.128 "nbd_device": "/dev/nbd0", 00:12:06.128 "bdev_name": "nvme0n1" 00:12:06.128 }, 00:12:06.128 { 00:12:06.128 "nbd_device": "/dev/nbd1", 00:12:06.128 "bdev_name": "nvme1n1" 00:12:06.128 }, 00:12:06.128 { 00:12:06.128 "nbd_device": "/dev/nbd2", 00:12:06.128 "bdev_name": "nvme1n2" 00:12:06.128 }, 00:12:06.128 { 00:12:06.128 "nbd_device": "/dev/nbd3", 00:12:06.128 "bdev_name": "nvme1n3" 00:12:06.128 }, 00:12:06.128 { 00:12:06.128 "nbd_device": "/dev/nbd4", 00:12:06.128 "bdev_name": "nvme2n1" 00:12:06.128 }, 00:12:06.128 { 00:12:06.128 "nbd_device": "/dev/nbd5", 00:12:06.128 "bdev_name": "nvme3n1" 00:12:06.128 } 00:12:06.128 ]' 00:12:06.128 14:13:04 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:06.128 14:13:04 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:06.128 14:13:04 -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:06.128 { 00:12:06.128 "nbd_device": "/dev/nbd0", 00:12:06.128 "bdev_name": "nvme0n1" 00:12:06.128 }, 00:12:06.128 { 00:12:06.128 "nbd_device": "/dev/nbd1", 00:12:06.128 "bdev_name": "nvme1n1" 00:12:06.128 }, 00:12:06.128 { 00:12:06.128 "nbd_device": "/dev/nbd2", 00:12:06.128 "bdev_name": "nvme1n2" 00:12:06.128 }, 00:12:06.128 { 00:12:06.128 "nbd_device": "/dev/nbd3", 00:12:06.128 
"bdev_name": "nvme1n3" 00:12:06.128 }, 00:12:06.128 { 00:12:06.128 "nbd_device": "/dev/nbd4", 00:12:06.128 "bdev_name": "nvme2n1" 00:12:06.128 }, 00:12:06.128 { 00:12:06.128 "nbd_device": "/dev/nbd5", 00:12:06.128 "bdev_name": "nvme3n1" 00:12:06.128 } 00:12:06.128 ]' 00:12:06.128 14:13:04 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:06.128 14:13:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:06.128 14:13:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:06.128 14:13:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:06.128 14:13:04 -- bdev/nbd_common.sh@51 -- # local i 00:12:06.128 14:13:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.128 14:13:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:06.386 14:13:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:06.386 14:13:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:06.386 14:13:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:06.386 14:13:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.386 14:13:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.386 14:13:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:06.386 14:13:04 -- bdev/nbd_common.sh@41 -- # break 00:12:06.386 14:13:04 -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.386 14:13:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.386 14:13:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:06.645 14:13:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:06.645 14:13:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:06.645 14:13:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:06.645 14:13:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.645 14:13:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.645 14:13:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:06.645 14:13:05 -- bdev/nbd_common.sh@41 -- # break 00:12:06.645 14:13:05 -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.645 14:13:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.645 14:13:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:06.903 14:13:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:06.903 14:13:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:06.903 14:13:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:06.903 14:13:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.903 14:13:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.903 14:13:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:06.903 14:13:05 -- bdev/nbd_common.sh@41 -- # break 00:12:06.903 14:13:05 -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.903 14:13:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.903 14:13:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:06.903 14:13:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:06.903 14:13:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:06.903 14:13:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:06.904 
14:13:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.904 14:13:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.904 14:13:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:06.904 14:13:05 -- bdev/nbd_common.sh@41 -- # break 00:12:06.904 14:13:05 -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.904 14:13:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.904 14:13:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:07.163 14:13:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:07.163 14:13:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:07.163 14:13:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:07.163 14:13:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:07.163 14:13:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:07.163 14:13:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:07.163 14:13:05 -- bdev/nbd_common.sh@41 -- # break 00:12:07.163 14:13:05 -- bdev/nbd_common.sh@45 -- # return 0 00:12:07.163 14:13:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:07.163 14:13:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:07.421 14:13:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:07.422 14:13:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:07.422 14:13:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:07.422 14:13:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:07.422 14:13:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:07.422 14:13:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:07.422 14:13:05 -- bdev/nbd_common.sh@41 -- # break 00:12:07.422 14:13:05 -- bdev/nbd_common.sh@45 -- # return 0 00:12:07.422 14:13:05 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:07.422 14:13:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:07.422 14:13:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@65 -- # true 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@65 -- # count=0 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@122 -- # count=0 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@127 -- # return 0 00:12:07.680 14:13:06 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@12 -- # local i 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:07.680 14:13:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:07.939 /dev/nbd0 00:12:07.939 14:13:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:07.939 14:13:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:07.939 14:13:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:07.939 14:13:06 -- common/autotest_common.sh@867 -- # local i 00:12:07.939 14:13:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:07.939 14:13:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:07.939 14:13:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:07.939 14:13:06 -- common/autotest_common.sh@871 -- # break 00:12:07.939 14:13:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:07.939 14:13:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:07.939 14:13:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.939 1+0 records in 00:12:07.939 1+0 records out 00:12:07.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643975 s, 6.4 MB/s 00:12:07.939 14:13:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.939 14:13:06 -- common/autotest_common.sh@884 -- # size=4096 00:12:07.939 14:13:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.939 14:13:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:07.939 14:13:06 -- common/autotest_common.sh@887 -- # return 0 00:12:07.939 14:13:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.939 14:13:06 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:07.939 14:13:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:07.939 /dev/nbd1 00:12:08.197 14:13:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:08.197 14:13:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:08.197 14:13:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:08.197 14:13:06 -- common/autotest_common.sh@867 -- # local i 00:12:08.197 14:13:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:08.197 14:13:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:08.197 14:13:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:08.197 14:13:06 -- common/autotest_common.sh@871 -- # break 
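The checks running here belong to the second attach pass, where nbd_rpc_data_verify pins each bdev to an explicit node (/dev/nbd0, /dev/nbd1, /dev/nbd10 through /dev/nbd13) instead of letting the server pick one. The pairing loop, condensed from bdev/nbd_common.sh as traced (rpc.py path as in the log; loop body simplified):

    # condensed from nbd_common.sh@14-17: one nbd_start_disk RPC per (bdev, node) pair
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdev_list=(nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for ((i = 0; i < 6; i++)); do
        "$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"   # same O_DIRECT read check as above
    done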
00:12:08.197 14:13:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:08.197 14:13:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:08.197 14:13:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.197 1+0 records in 00:12:08.197 1+0 records out 00:12:08.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000755256 s, 5.4 MB/s 00:12:08.197 14:13:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.197 14:13:06 -- common/autotest_common.sh@884 -- # size=4096 00:12:08.197 14:13:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.197 14:13:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:08.197 14:13:06 -- common/autotest_common.sh@887 -- # return 0 00:12:08.197 14:13:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.197 14:13:06 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:08.197 14:13:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:12:08.197 /dev/nbd10 00:12:08.197 14:13:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:08.197 14:13:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:08.197 14:13:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:08.197 14:13:06 -- common/autotest_common.sh@867 -- # local i 00:12:08.197 14:13:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:08.197 14:13:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:08.197 14:13:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:08.197 14:13:06 -- common/autotest_common.sh@871 -- # break 00:12:08.197 14:13:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:08.197 14:13:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:08.197 14:13:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.197 1+0 records in 00:12:08.197 1+0 records out 00:12:08.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587968 s, 7.0 MB/s 00:12:08.197 14:13:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.197 14:13:06 -- common/autotest_common.sh@884 -- # size=4096 00:12:08.198 14:13:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.198 14:13:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:08.198 14:13:06 -- common/autotest_common.sh@887 -- # return 0 00:12:08.198 14:13:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.198 14:13:06 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:08.198 14:13:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:12:08.456 /dev/nbd11 00:12:08.456 14:13:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:08.456 14:13:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:08.456 14:13:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:08.456 14:13:06 -- common/autotest_common.sh@867 -- # local i 00:12:08.456 14:13:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:08.456 14:13:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:08.456 14:13:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:08.456 14:13:06 -- 
common/autotest_common.sh@871 -- # break 00:12:08.456 14:13:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:08.456 14:13:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:08.456 14:13:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.456 1+0 records in 00:12:08.456 1+0 records out 00:12:08.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104015 s, 3.9 MB/s 00:12:08.456 14:13:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.456 14:13:06 -- common/autotest_common.sh@884 -- # size=4096 00:12:08.456 14:13:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.456 14:13:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:08.456 14:13:06 -- common/autotest_common.sh@887 -- # return 0 00:12:08.456 14:13:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.456 14:13:06 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:08.456 14:13:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:12:08.718 /dev/nbd12 00:12:08.718 14:13:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:08.718 14:13:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:08.718 14:13:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:08.718 14:13:07 -- common/autotest_common.sh@867 -- # local i 00:12:08.718 14:13:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:08.718 14:13:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:08.718 14:13:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:08.718 14:13:07 -- common/autotest_common.sh@871 -- # break 00:12:08.718 14:13:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:08.718 14:13:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:08.718 14:13:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.718 1+0 records in 00:12:08.718 1+0 records out 00:12:08.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000971207 s, 4.2 MB/s 00:12:08.718 14:13:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.718 14:13:07 -- common/autotest_common.sh@884 -- # size=4096 00:12:08.718 14:13:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.718 14:13:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:08.718 14:13:07 -- common/autotest_common.sh@887 -- # return 0 00:12:08.718 14:13:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.718 14:13:07 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:08.718 14:13:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:08.979 /dev/nbd13 00:12:08.979 14:13:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:08.979 14:13:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:08.979 14:13:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:08.979 14:13:07 -- common/autotest_common.sh@867 -- # local i 00:12:08.979 14:13:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:08.979 14:13:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:08.979 14:13:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
00:12:08.979 14:13:07 -- common/autotest_common.sh@871 -- # break 00:12:08.979 14:13:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:08.979 14:13:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:08.979 14:13:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.979 1+0 records in 00:12:08.979 1+0 records out 00:12:08.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000809424 s, 5.1 MB/s 00:12:08.979 14:13:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.979 14:13:07 -- common/autotest_common.sh@884 -- # size=4096 00:12:08.979 14:13:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.979 14:13:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:08.979 14:13:07 -- common/autotest_common.sh@887 -- # return 0 00:12:08.979 14:13:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.979 14:13:07 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:08.979 14:13:07 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:08.979 14:13:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:08.979 14:13:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:09.241 14:13:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:09.241 { 00:12:09.241 "nbd_device": "/dev/nbd0", 00:12:09.241 "bdev_name": "nvme0n1" 00:12:09.241 }, 00:12:09.241 { 00:12:09.241 "nbd_device": "/dev/nbd1", 00:12:09.241 "bdev_name": "nvme1n1" 00:12:09.241 }, 00:12:09.241 { 00:12:09.241 "nbd_device": "/dev/nbd10", 00:12:09.241 "bdev_name": "nvme1n2" 00:12:09.241 }, 00:12:09.241 { 00:12:09.241 "nbd_device": "/dev/nbd11", 00:12:09.241 "bdev_name": "nvme1n3" 00:12:09.241 }, 00:12:09.241 { 00:12:09.241 "nbd_device": "/dev/nbd12", 00:12:09.241 "bdev_name": "nvme2n1" 00:12:09.241 }, 00:12:09.241 { 00:12:09.241 "nbd_device": "/dev/nbd13", 00:12:09.241 "bdev_name": "nvme3n1" 00:12:09.241 } 00:12:09.241 ]' 00:12:09.241 14:13:07 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:09.241 { 00:12:09.241 "nbd_device": "/dev/nbd0", 00:12:09.241 "bdev_name": "nvme0n1" 00:12:09.241 }, 00:12:09.241 { 00:12:09.241 "nbd_device": "/dev/nbd1", 00:12:09.241 "bdev_name": "nvme1n1" 00:12:09.241 }, 00:12:09.241 { 00:12:09.241 "nbd_device": "/dev/nbd10", 00:12:09.241 "bdev_name": "nvme1n2" 00:12:09.241 }, 00:12:09.241 { 00:12:09.241 "nbd_device": "/dev/nbd11", 00:12:09.241 "bdev_name": "nvme1n3" 00:12:09.241 }, 00:12:09.241 { 00:12:09.241 "nbd_device": "/dev/nbd12", 00:12:09.241 "bdev_name": "nvme2n1" 00:12:09.241 }, 00:12:09.241 { 00:12:09.241 "nbd_device": "/dev/nbd13", 00:12:09.241 "bdev_name": "nvme3n1" 00:12:09.241 } 00:12:09.241 ]' 00:12:09.241 14:13:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:09.241 14:13:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:09.241 /dev/nbd1 00:12:09.241 /dev/nbd10 00:12:09.241 /dev/nbd11 00:12:09.241 /dev/nbd12 00:12:09.241 /dev/nbd13' 00:12:09.241 14:13:07 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:09.241 /dev/nbd1 00:12:09.241 /dev/nbd10 00:12:09.241 /dev/nbd11 00:12:09.241 /dev/nbd12 00:12:09.241 /dev/nbd13' 00:12:09.241 14:13:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:09.241 14:13:07 -- bdev/nbd_common.sh@65 -- # count=6 00:12:09.241 14:13:07 -- bdev/nbd_common.sh@66 -- # echo 6 00:12:09.241 14:13:07 -- bdev/nbd_common.sh@95 -- # 
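nbd_get_count, traced above, asks the SPDK target for its active exports over the RPC socket and counts the /dev/nbd entries in the JSON reply; the caller then asserts the count matches the six devices it started. Roughly, as traced (rpc.py path shortened):

  nbd_get_count() {
      local rpc_server=$1 disks_json disks_name
      disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
      disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
      # grep -c exits non-zero on zero matches, hence the trailing true.
      echo "$disks_name" | grep -c /dev/nbd || true
  }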
count=6 00:12:09.241 14:13:07 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:09.241 14:13:07 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:09.241 14:13:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:09.241 14:13:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:09.241 14:13:07 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:09.241 14:13:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:09.241 14:13:07 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:09.241 14:13:07 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:09.241 256+0 records in 00:12:09.241 256+0 records out 00:12:09.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107718 s, 97.3 MB/s 00:12:09.241 14:13:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.241 14:13:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:09.502 256+0 records in 00:12:09.502 256+0 records out 00:12:09.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245254 s, 4.3 MB/s 00:12:09.502 14:13:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.502 14:13:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:09.764 256+0 records in 00:12:09.764 256+0 records out 00:12:09.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.249114 s, 4.2 MB/s 00:12:09.764 14:13:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.764 14:13:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:10.025 256+0 records in 00:12:10.025 256+0 records out 00:12:10.025 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239885 s, 4.4 MB/s 00:12:10.025 14:13:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.025 14:13:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:10.285 256+0 records in 00:12:10.285 256+0 records out 00:12:10.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.246903 s, 4.2 MB/s 00:12:10.285 14:13:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.285 14:13:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:10.547 256+0 records in 00:12:10.547 256+0 records out 00:12:10.547 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.312073 s, 3.4 MB/s 00:12:10.547 14:13:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.547 14:13:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:10.808 256+0 records in 00:12:10.808 256+0 records out 00:12:10.808 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239466 s, 4.4 MB/s 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:10.808 14:13:09 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@51 -- # local i 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.808 14:13:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:11.066 14:13:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:11.066 14:13:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:11.066 14:13:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:11.066 14:13:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.066 14:13:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.066 14:13:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:11.066 14:13:09 -- bdev/nbd_common.sh@41 -- # break 00:12:11.066 14:13:09 -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.066 14:13:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.066 14:13:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:11.324 14:13:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:11.324 14:13:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:11.324 14:13:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:11.324 14:13:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.324 14:13:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.324 14:13:09 -- 
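The data round-trip above is plain dd plus cmp: write the same 1 MiB of random bytes through every NBD export, then compare the first 1 MiB of each device back against the source file. In outline, with the paths from this run:

  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  tmp=test/bdev/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct  # write phase
  done
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp" "$dev"                             # verify: -b prints differing bytes
  done
  rm "$tmp"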
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:11.324 14:13:09 -- bdev/nbd_common.sh@41 -- # break 00:12:11.324 14:13:09 -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.324 14:13:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.324 14:13:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:11.582 14:13:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:11.582 14:13:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:11.582 14:13:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:11.582 14:13:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.582 14:13:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.582 14:13:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:11.582 14:13:09 -- bdev/nbd_common.sh@41 -- # break 00:12:11.582 14:13:09 -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.582 14:13:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.582 14:13:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:11.582 14:13:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:11.582 14:13:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:11.582 14:13:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:11.582 14:13:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.582 14:13:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.582 14:13:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:11.582 14:13:10 -- bdev/nbd_common.sh@41 -- # break 00:12:11.582 14:13:10 -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.582 14:13:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.582 14:13:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:11.839 14:13:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:11.839 14:13:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:11.839 14:13:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:11.839 14:13:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.839 14:13:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.839 14:13:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:11.839 14:13:10 -- bdev/nbd_common.sh@41 -- # break 00:12:11.839 14:13:10 -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.839 14:13:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.839 14:13:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:12.097 14:13:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:12.097 14:13:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:12.097 14:13:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:12.097 14:13:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.097 14:13:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.097 14:13:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:12.097 14:13:10 -- bdev/nbd_common.sh@41 -- # break 00:12:12.097 14:13:10 -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.097 14:13:10 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:12.097 14:13:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:12.097 14:13:10 -- bdev/nbd_common.sh@63 -- # 
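Stopping is the mirror image of startup: nbd_stop_disk is issued over RPC for each device, and waitfornbd_exit polls /proc/partitions until the kernel entry disappears. Sketch of the traced helper (sleep interval assumed, as above):

  waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          # Done once the device has dropped out of the partition table.
          grep -q -w "$nbd_name" /proc/partitions || break
          sleep 0.1
      done
      return 0
  }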
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:12.355 14:13:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:12.355 14:13:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:12.355 14:13:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:12.355 14:13:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:12.355 14:13:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:12.355 14:13:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:12.355 14:13:10 -- bdev/nbd_common.sh@65 -- # true 00:12:12.355 14:13:10 -- bdev/nbd_common.sh@65 -- # count=0 00:12:12.355 14:13:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:12.355 14:13:10 -- bdev/nbd_common.sh@104 -- # count=0 00:12:12.355 14:13:10 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:12.356 14:13:10 -- bdev/nbd_common.sh@109 -- # return 0 00:12:12.356 14:13:10 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:12.356 14:13:10 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:12.356 14:13:10 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:12.356 14:13:10 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:12.356 14:13:10 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:12.356 14:13:10 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:12.356 malloc_lvol_verify 00:12:12.356 14:13:10 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:12.614 ae8ccf7c-fb7e-40fb-b332-63cb5fd65dd7 00:12:12.614 14:13:11 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:12.872 65512020-858e-40e2-8aed-1aabf663aa3d 00:12:12.872 14:13:11 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:13.130 /dev/nbd0 00:12:13.130 14:13:11 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:13.130 mke2fs 1.47.0 (5-Feb-2023) 00:12:13.130 Discarding device blocks: 0/4096 done 00:12:13.130 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:13.130 00:12:13.130 Allocating group tables: 0/1 done 00:12:13.130 Writing inode tables: 0/1 done 00:12:13.130 Creating journal (1024 blocks): done 00:12:13.130 Writing superblocks and filesystem accounting information: 0/1 done 00:12:13.130 00:12:13.130 14:13:11 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:13.130 14:13:11 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:13.130 14:13:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:13.130 14:13:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:13.130 14:13:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:13.130 14:13:11 -- bdev/nbd_common.sh@51 -- # local i 00:12:13.130 14:13:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.130 14:13:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:13.130 14:13:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:13.389 14:13:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:13.389 14:13:11 -- bdev/nbd_common.sh@35 
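nbd_with_lvol_verify, traced above, layers a logical volume on a fresh malloc bdev, exports it over NBD, and treats a successful mkfs.ext4 as proof the whole stack works end to end. The RPC sequence from this run (a 16 MiB malloc bdev with 512-byte blocks, then a 4 MiB lvol):

  rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create -b malloc_lvol_verify 16 512
  $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
  $rpc bdev_lvol_create lvol 4 -l lvs   # 4 MiB volume inside lvstore "lvs"
  $rpc nbd_start_disk lvs/lvol /dev/nbd0
  mkfs.ext4 /dev/nbd0                   # "4096 1k blocks and 1024 inodes" per the log
  $rpc nbd_stop_disk /dev/nbd0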
-- # local nbd_name=nbd0 00:12:13.389 14:13:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:13.389 14:13:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.389 14:13:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:13.389 14:13:11 -- bdev/nbd_common.sh@41 -- # break 00:12:13.389 14:13:11 -- bdev/nbd_common.sh@45 -- # return 0 00:12:13.389 14:13:11 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:13.389 14:13:11 -- bdev/nbd_common.sh@147 -- # return 0 00:12:13.389 14:13:11 -- bdev/blockdev.sh@324 -- # killprocess 67899 00:12:13.389 14:13:11 -- common/autotest_common.sh@936 -- # '[' -z 67899 ']' 00:12:13.389 14:13:11 -- common/autotest_common.sh@940 -- # kill -0 67899 00:12:13.389 14:13:11 -- common/autotest_common.sh@941 -- # uname 00:12:13.389 14:13:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:13.389 14:13:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67899 00:12:13.389 killing process with pid 67899 00:12:13.389 14:13:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:13.389 14:13:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:13.389 14:13:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67899' 00:12:13.389 14:13:11 -- common/autotest_common.sh@955 -- # kill 67899 00:12:13.389 14:13:11 -- common/autotest_common.sh@960 -- # wait 67899 00:12:13.957 14:13:12 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:13.957 00:12:13.957 real 0m10.145s 00:12:13.957 user 0m13.473s 00:12:13.957 sys 0m3.447s 00:12:13.957 14:13:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:13.957 14:13:12 -- common/autotest_common.sh@10 -- # set +x 00:12:13.957 ************************************ 00:12:13.957 END TEST bdev_nbd 00:12:13.957 ************************************ 00:12:13.957 14:13:12 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:13.957 14:13:12 -- bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:12:13.957 14:13:12 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:12:13.957 14:13:12 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:13.957 14:13:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:13.957 14:13:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:13.957 14:13:12 -- common/autotest_common.sh@10 -- # set +x 00:12:13.957 ************************************ 00:12:13.957 START TEST bdev_fio 00:12:13.957 ************************************ 00:12:13.957 14:13:12 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:12:13.957 14:13:12 -- bdev/blockdev.sh@329 -- # local env_context 00:12:13.957 14:13:12 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:13.957 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:13.957 14:13:12 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:13.958 14:13:12 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:13.958 14:13:12 -- bdev/blockdev.sh@337 -- # echo '' 00:12:13.958 14:13:12 -- bdev/blockdev.sh@337 -- # env_context= 00:12:13.958 14:13:12 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:13.958 14:13:12 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:13.958 14:13:12 -- common/autotest_common.sh@1270 -- # local workload=verify 00:12:13.958 14:13:12 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:12:13.958 
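The bdev_nbd teardown earlier in this block ends with killprocess 67899. As traced, it confirms the pid is alive, inspects the command name (a target launched through sudo must be signalled with sudo), then SIGTERMs and reaps it; the non-Linux branch is elided in this sketch:

  killprocess() {
      local pid=$1 process_name
      kill -0 "$pid"   # is it still alive?
      process_name=$(ps --no-headers -o comm= "$pid")
      echo "killing process with pid $pid"
      if [ "$process_name" = sudo ]; then
          sudo kill "$pid"   # signal as root when the target ran under sudo
      else
          kill "$pid"
      fi
      wait "$pid"   # reap it before the next test stage starts
  }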
14:13:12 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:13.958 14:13:12 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:13.958 14:13:12 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:13.958 14:13:12 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:12:13.958 14:13:12 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:13.958 14:13:12 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:13.958 14:13:12 -- common/autotest_common.sh@1290 -- # cat 00:12:13.958 14:13:12 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:12:13.958 14:13:12 -- common/autotest_common.sh@1303 -- # cat 00:12:13.958 14:13:12 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:12:13.958 14:13:12 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:12:14.218 14:13:12 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:14.218 14:13:12 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:12:14.218 14:13:12 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:14.218 14:13:12 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:12:14.218 14:13:12 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:12:14.218 14:13:12 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:14.218 14:13:12 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:12:14.218 14:13:12 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:12:14.218 14:13:12 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:14.218 14:13:12 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:12:14.218 14:13:12 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:12:14.219 14:13:12 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:14.219 14:13:12 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 00:12:14.219 14:13:12 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:12:14.219 14:13:12 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:14.219 14:13:12 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:12:14.219 14:13:12 -- bdev/blockdev.sh@341 -- # echo filename=nvme2n1 00:12:14.219 14:13:12 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:14.219 14:13:12 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:12:14.219 14:13:12 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:12:14.219 14:13:12 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:14.219 14:13:12 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:14.219 14:13:12 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:14.219 14:13:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:14.219 14:13:12 -- common/autotest_common.sh@10 -- # set +x 00:12:14.219 ************************************ 00:12:14.219 START TEST bdev_fio_rw_verify 00:12:14.219 ************************************ 00:12:14.219 14:13:12 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
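The job file driven below is assembled exactly as traced: fio_config_gen writes a base verify config (adding serialize_overlap=1 once it has confirmed a fio 3.x build), then one [job_...] section per bdev, with the SPDK bdev name standing in for fio's filename:

  for b in nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1; do
      {
          echo "[job_$b]"
          echo "filename=$b"   # fio "filename" here is the SPDK bdev name
      } >> test/bdev/bdev.fio
  done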
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:14.219 14:13:12 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:14.219 14:13:12 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:12:14.219 14:13:12 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:14.219 14:13:12 -- common/autotest_common.sh@1328 -- # local sanitizers 00:12:14.219 14:13:12 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:14.219 14:13:12 -- common/autotest_common.sh@1330 -- # shift 00:12:14.219 14:13:12 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:12:14.219 14:13:12 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:12:14.219 14:13:12 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:14.219 14:13:12 -- common/autotest_common.sh@1334 -- # grep libasan 00:12:14.219 14:13:12 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:12:14.219 14:13:12 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:14.219 14:13:12 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:14.219 14:13:12 -- common/autotest_common.sh@1336 -- # break 00:12:14.219 14:13:12 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:14.219 14:13:12 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:14.219 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:14.219 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:14.219 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:14.219 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:14.219 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:14.219 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:14.219 fio-3.35 00:12:14.219 Starting 6 threads 00:12:26.479 00:12:26.479 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=68296: Tue Nov 19 14:13:23 2024 00:12:26.479 read: IOPS=11.8k, BW=46.3MiB/s (48.5MB/s)(463MiB/10004msec) 00:12:26.479 slat (usec): min=2, max=2041, avg= 6.58, stdev=16.65 00:12:26.479 clat (usec): min=108, max=7146, avg=1707.08, stdev=819.15 00:12:26.479 lat (usec): min=111, max=7150, avg=1713.66, stdev=819.73 00:12:26.480 clat percentiles (usec): 00:12:26.480 | 50.000th=[ 1598], 99.000th=[ 4293], 99.900th=[ 5800], 99.990th=[ 6915], 
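Note the LD_PRELOAD dance in the invocation above: the external spdk_bdev ioengine is built with ASan, so the matching sanitizer runtime must be loaded before the plugin itself. The harness ldd's the plugin, picks out libasan, and preloads both. Condensed from the trace:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      test/bdev/bdev.fio --verify_state_save=0 \
      --spdk_json_conf=test/bdev/bdev.json --spdk_mem=0 \
      --aux-path=/home/vagrant/spdk_repo/spdk/../output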
00:12:26.480 | 99.999th=[ 7111] 00:12:26.480 write: IOPS=12.2k, BW=47.6MiB/s (49.9MB/s)(476MiB/10004msec); 0 zone resets 00:12:26.480 slat (usec): min=12, max=8327, avg=42.80, stdev=160.93 00:12:26.480 clat (usec): min=110, max=11798, avg=1943.44, stdev=900.88 00:12:26.480 lat (usec): min=133, max=11833, avg=1986.24, stdev=914.80 00:12:26.480 clat percentiles (usec): 00:12:26.480 | 50.000th=[ 1795], 99.000th=[ 4686], 99.900th=[ 6325], 99.990th=[ 7963], 00:12:26.480 | 99.999th=[11731] 00:12:26.480 bw ( KiB/s): min=38772, max=57888, per=100.00%, avg=48917.63, stdev=741.92, samples=114 00:12:26.480 iops : min= 9692, max=14472, avg=12228.68, stdev=185.50, samples=114 00:12:26.480 lat (usec) : 250=0.31%, 500=1.98%, 750=4.81%, 1000=8.09% 00:12:26.480 lat (msec) : 2=49.57%, 4=32.99%, 10=2.25%, 20=0.01% 00:12:26.480 cpu : usr=47.75%, sys=30.00%, ctx=5862, majf=0, minf=14697 00:12:26.480 IO depths : 1=11.4%, 2=23.8%, 4=51.2%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:26.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.480 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.480 issued rwts: total=118545,121888,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.480 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:26.480 00:12:26.480 Run status group 0 (all jobs): 00:12:26.480 READ: bw=46.3MiB/s (48.5MB/s), 46.3MiB/s-46.3MiB/s (48.5MB/s-48.5MB/s), io=463MiB (486MB), run=10004-10004msec 00:12:26.480 WRITE: bw=47.6MiB/s (49.9MB/s), 47.6MiB/s-47.6MiB/s (49.9MB/s-49.9MB/s), io=476MiB (499MB), run=10004-10004msec 00:12:26.480 ----------------------------------------------------- 00:12:26.480 Suppressions used: 00:12:26.480 count bytes template 00:12:26.480 6 48 /usr/src/fio/parse.c 00:12:26.480 3278 314688 /usr/src/fio/iolog.c 00:12:26.480 1 8 libtcmalloc_minimal.so 00:12:26.480 1 904 libcrypto.so 00:12:26.480 ----------------------------------------------------- 00:12:26.480 00:12:26.480 00:12:26.480 real 0m11.889s 00:12:26.480 user 0m30.183s 00:12:26.480 sys 0m18.380s 00:12:26.480 14:13:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:26.480 ************************************ 00:12:26.480 END TEST bdev_fio_rw_verify 00:12:26.480 ************************************ 00:12:26.480 14:13:24 -- common/autotest_common.sh@10 -- # set +x 00:12:26.480 14:13:24 -- bdev/blockdev.sh@348 -- # rm -f 00:12:26.480 14:13:24 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:26.480 14:13:24 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:26.480 14:13:24 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:26.480 14:13:24 -- common/autotest_common.sh@1270 -- # local workload=trim 00:12:26.480 14:13:24 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:12:26.480 14:13:24 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:26.480 14:13:24 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:26.480 14:13:24 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:26.480 14:13:24 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:12:26.480 14:13:24 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:26.480 14:13:24 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:26.480 14:13:24 -- common/autotest_common.sh@1290 -- # cat 00:12:26.480 
14:13:24 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:12:26.480 14:13:24 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:12:26.480 14:13:24 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:12:26.480 14:13:24 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:26.480 14:13:24 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "970fe5ac-c4d7-49af-a914-98c1514828a5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "970fe5ac-c4d7-49af-a914-98c1514828a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "67a72b3e-bfbc-419f-8358-a8918a71ccdc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "67a72b3e-bfbc-419f-8358-a8918a71ccdc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "360475e1-6e4d-472b-a747-7a64bfd95843"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "360475e1-6e4d-472b-a747-7a64bfd95843",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "eafc9bba-134e-4557-8921-494617bf13de"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eafc9bba-134e-4557-8921-494617bf13de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "00231919-56c2-4b31-b3b8-045a57ffdd52"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "00231919-56c2-4b31-b3b8-045a57ffdd52",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "71fb65ba-5a61-4aa1-8b54-c283587df9ba"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "71fb65ba-5a61-4aa1-8b54-c283587df9ba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:26.480 14:13:24 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:12:26.480 14:13:24 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:26.480 /home/vagrant/spdk_repo/spdk 00:12:26.480 14:13:24 -- bdev/blockdev.sh@360 -- # popd 00:12:26.480 14:13:24 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:12:26.480 14:13:24 -- bdev/blockdev.sh@362 -- # return 0 00:12:26.480 00:12:26.480 real 0m12.068s 00:12:26.480 user 0m30.262s 00:12:26.480 sys 0m18.459s 00:12:26.480 14:13:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:26.480 ************************************ 00:12:26.480 END TEST bdev_fio 00:12:26.480 ************************************ 00:12:26.480 14:13:24 -- common/autotest_common.sh@10 -- # set +x 00:12:26.480 14:13:24 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:26.480 14:13:24 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:26.480 14:13:24 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:26.480 14:13:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:26.480 14:13:24 -- common/autotest_common.sh@10 -- # set +x 00:12:26.480 ************************************ 00:12:26.481 START TEST bdev_verify 00:12:26.481 ************************************ 00:12:26.481 14:13:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:26.481 [2024-11-19 14:13:24.693524] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:26.481 [2024-11-19 14:13:24.693667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68471 ] 00:12:26.481 [2024-11-19 14:13:24.848229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:26.742 [2024-11-19 14:13:25.067093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.742 [2024-11-19 14:13:25.067176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.004 Running I/O for 5 seconds... 
00:12:32.297 00:12:32.297 Latency(us) 00:12:32.297 [2024-11-19T14:13:30.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.297 [2024-11-19T14:13:30.859Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:32.297 Verification LBA range: start 0x0 length 0x20000 00:12:32.297 nvme0n1 : 5.08 2323.76 9.08 0.00 0.00 54952.56 9477.51 77030.01 00:12:32.297 [2024-11-19T14:13:30.859Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:32.297 Verification LBA range: start 0x20000 length 0x20000 00:12:32.297 nvme0n1 : 5.06 2105.23 8.22 0.00 0.00 60618.26 15426.17 81869.59 00:12:32.297 [2024-11-19T14:13:30.859Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:32.297 Verification LBA range: start 0x0 length 0x80000 00:12:32.297 nvme1n1 : 5.09 2291.83 8.95 0.00 0.00 55546.96 14216.27 77030.01 00:12:32.297 [2024-11-19T14:13:30.859Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:32.297 Verification LBA range: start 0x80000 length 0x80000 00:12:32.297 nvme1n1 : 5.07 2025.56 7.91 0.00 0.00 62864.82 13812.97 79853.10 00:12:32.297 [2024-11-19T14:13:30.859Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:32.297 Verification LBA range: start 0x0 length 0x80000 00:12:32.297 nvme1n2 : 5.08 2209.27 8.63 0.00 0.00 57557.68 5545.35 71383.83 00:12:32.297 [2024-11-19T14:13:30.859Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:32.297 Verification LBA range: start 0x80000 length 0x80000 00:12:32.297 nvme1n2 : 5.07 1986.82 7.76 0.00 0.00 64087.95 16232.76 79853.10 00:12:32.297 [2024-11-19T14:13:30.859Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:32.297 Verification LBA range: start 0x0 length 0x80000 00:12:32.297 nvme1n3 : 5.09 2253.13 8.80 0.00 0.00 56347.93 10233.70 73400.32 00:12:32.297 [2024-11-19T14:13:30.859Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:32.297 Verification LBA range: start 0x80000 length 0x80000 00:12:32.297 nvme1n3 : 5.07 2101.27 8.21 0.00 0.00 60512.15 16031.11 83886.08 00:12:32.297 [2024-11-19T14:13:30.859Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:32.297 Verification LBA range: start 0x0 length 0xbd0bd 00:12:32.297 nvme2n1 : 5.09 2149.87 8.40 0.00 0.00 58941.02 8368.44 97598.23 00:12:32.297 [2024-11-19T14:13:30.859Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:32.297 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:32.297 nvme2n1 : 5.08 1832.92 7.16 0.00 0.00 69357.38 8721.33 119376.34 00:12:32.297 [2024-11-19T14:13:30.859Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:32.297 Verification LBA range: start 0x0 length 0xa0000 00:12:32.297 nvme3n1 : 5.09 2272.84 8.88 0.00 0.00 55704.54 3705.30 69770.63 00:12:32.297 [2024-11-19T14:13:30.859Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:32.297 Verification LBA range: start 0xa0000 length 0xa0000 00:12:32.297 nvme3n1 : 5.08 1987.78 7.76 0.00 0.00 63637.77 6225.92 94371.84 00:12:32.297 [2024-11-19T14:13:30.859Z] =================================================================================================================== 00:12:32.297 [2024-11-19T14:13:30.859Z] Total : 25540.27 99.77 0.00 0.00 59725.37 3705.30 119376.34 00:12:33.238 00:12:33.238 real 0m6.889s 00:12:33.238 user 0m8.606s 00:12:33.238 
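The whole bdev_verify pass above is one bdevperf run against the same JSON config. Per the invocation logged before the table, -q 128 is the queue depth, -o 4096 the I/O size in bytes, -w verify the workload, -t 5 the runtime in seconds, and -m 0x3 the core mask matching the two reactors whose startup is logged:

  build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3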
sys 0m3.134s 00:12:33.238 14:13:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:33.239 ************************************ 00:12:33.239 END TEST bdev_verify 00:12:33.239 ************************************ 00:12:33.239 14:13:31 -- common/autotest_common.sh@10 -- # set +x 00:12:33.239 14:13:31 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:33.239 14:13:31 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:33.239 14:13:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:33.239 14:13:31 -- common/autotest_common.sh@10 -- # set +x 00:12:33.239 ************************************ 00:12:33.239 START TEST bdev_verify_big_io 00:12:33.239 ************************************ 00:12:33.239 14:13:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:33.239 [2024-11-19 14:13:31.648706] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:33.239 [2024-11-19 14:13:31.648837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68571 ] 00:12:33.500 [2024-11-19 14:13:31.799450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:33.500 [2024-11-19 14:13:32.020260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.500 [2024-11-19 14:13:32.020348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.074 Running I/O for 5 seconds... 
00:12:40.663 00:12:40.663 Latency(us) 00:12:40.663 [2024-11-19T14:13:39.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.663 [2024-11-19T14:13:39.225Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:40.663 Verification LBA range: start 0x0 length 0x2000 00:12:40.663 nvme0n1 : 5.59 281.21 17.58 0.00 0.00 433067.07 62107.96 703352.52 00:12:40.663 [2024-11-19T14:13:39.225Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:40.663 Verification LBA range: start 0x2000 length 0x2000 00:12:40.663 nvme0n1 : 5.57 265.19 16.57 0.00 0.00 473833.26 88725.66 545259.52 00:12:40.663 [2024-11-19T14:13:39.225Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:40.663 Verification LBA range: start 0x0 length 0x8000 00:12:40.663 nvme1n1 : 5.63 262.90 16.43 0.00 0.00 459966.60 37506.76 683994.19 00:12:40.663 [2024-11-19T14:13:39.225Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:40.663 Verification LBA range: start 0x8000 length 0x8000 00:12:40.663 nvme1n1 : 5.58 297.54 18.60 0.00 0.00 417490.06 18753.38 632371.99 00:12:40.663 [2024-11-19T14:13:39.225Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:40.663 Verification LBA range: start 0x0 length 0x8000 00:12:40.663 nvme1n2 : 5.63 262.63 16.41 0.00 0.00 453084.54 68560.74 767880.27 00:12:40.663 [2024-11-19T14:13:39.225Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:40.663 Verification LBA range: start 0x8000 length 0x8000 00:12:40.663 nvme1n2 : 5.58 281.69 17.61 0.00 0.00 435522.86 19862.45 554938.68 00:12:40.663 [2024-11-19T14:13:39.225Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:40.663 Verification LBA range: start 0x0 length 0x8000 00:12:40.663 nvme1n3 : 5.66 276.81 17.30 0.00 0.00 423255.94 29844.09 551712.30 00:12:40.663 [2024-11-19T14:13:39.225Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:40.663 Verification LBA range: start 0x8000 length 0x8000 00:12:40.663 nvme1n3 : 5.58 232.51 14.53 0.00 0.00 511998.44 77836.60 509769.26 00:12:40.663 [2024-11-19T14:13:39.225Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:40.663 Verification LBA range: start 0x0 length 0xbd0b 00:12:40.663 nvme2n1 : 5.66 350.52 21.91 0.00 0.00 324816.48 22887.19 451694.28 00:12:40.663 [2024-11-19T14:13:39.225Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:40.663 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:40.663 nvme2n1 : 5.58 342.01 21.38 0.00 0.00 344306.24 18148.43 416204.01 00:12:40.663 [2024-11-19T14:13:39.225Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:40.663 Verification LBA range: start 0x0 length 0xa000 00:12:40.663 nvme3n1 : 5.72 321.14 20.07 0.00 0.00 345289.68 1487.16 461373.44 00:12:40.663 [2024-11-19T14:13:39.225Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:40.663 Verification LBA range: start 0xa000 length 0xa000 00:12:40.663 nvme3n1 : 5.58 313.02 19.56 0.00 0.00 370177.91 5066.44 493637.32 00:12:40.663 [2024-11-19T14:13:39.225Z] =================================================================================================================== 00:12:40.663 [2024-11-19T14:13:39.225Z] Total : 3487.18 217.95 0.00 0.00 409526.59 1487.16 767880.27 00:12:40.924 00:12:40.924 real 0m7.738s 00:12:40.924 user 
0m13.718s 00:12:40.924 sys 0m0.620s 00:12:40.924 14:13:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:40.924 ************************************ 00:12:40.924 END TEST bdev_verify_big_io 00:12:40.924 ************************************ 00:12:40.924 14:13:39 -- common/autotest_common.sh@10 -- # set +x 00:12:40.924 14:13:39 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:40.924 14:13:39 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:40.924 14:13:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:40.924 14:13:39 -- common/autotest_common.sh@10 -- # set +x 00:12:40.924 ************************************ 00:12:40.924 START TEST bdev_write_zeroes 00:12:40.924 ************************************ 00:12:40.924 14:13:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:40.924 [2024-11-19 14:13:39.452541] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:40.924 [2024-11-19 14:13:39.452669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68676 ] 00:12:41.185 [2024-11-19 14:13:39.599329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.447 [2024-11-19 14:13:39.818640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.709 Running I/O for 1 seconds... 00:12:43.092 00:12:43.092 Latency(us) 00:12:43.092 [2024-11-19T14:13:41.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.092 [2024-11-19T14:13:41.654Z] Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:43.092 nvme0n1 : 1.01 11515.18 44.98 0.00 0.00 11104.41 8721.33 21778.12 00:12:43.092 [2024-11-19T14:13:41.654Z] Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:43.092 nvme1n1 : 1.01 11499.86 44.92 0.00 0.00 11109.21 8771.74 20265.75 00:12:43.092 [2024-11-19T14:13:41.654Z] Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:43.092 nvme1n2 : 1.01 11485.06 44.86 0.00 0.00 11113.93 8771.74 18955.03 00:12:43.092 [2024-11-19T14:13:41.654Z] Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:43.092 nvme1n3 : 1.02 11470.16 44.81 0.00 0.00 11118.06 8620.50 20064.10 00:12:43.092 [2024-11-19T14:13:41.654Z] Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:43.092 nvme2n1 : 1.01 12515.05 48.89 0.00 0.00 10182.19 2974.33 21979.77 00:12:43.092 [2024-11-19T14:13:41.654Z] Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:43.092 nvme3n1 : 1.02 11454.84 44.75 0.00 0.00 11090.42 8469.27 24802.86 00:12:43.092 [2024-11-19T14:13:41.654Z] =================================================================================================================== 00:12:43.092 [2024-11-19T14:13:41.654Z] Total : 69940.15 273.20 0.00 0.00 10942.25 2974.33 24802.86 00:12:43.663 00:12:43.663 real 0m2.759s 00:12:43.663 user 0m2.093s 00:12:43.663 sys 0m0.486s 00:12:43.664 14:13:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 
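The big-I/O and write_zeroes passes in this block reuse the same bdevperf harness with only the I/O size, workload, and runtime swapped (paths shortened):

  build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # bdev_verify_big_io
  build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1        # bdev_write_zeroes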
00:12:43.664 ************************************ 00:12:43.664 END TEST bdev_write_zeroes 00:12:43.664 ************************************ 00:12:43.664 14:13:42 -- common/autotest_common.sh@10 -- # set +x 00:12:43.664 14:13:42 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:43.664 14:13:42 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:43.664 14:13:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:43.664 14:13:42 -- common/autotest_common.sh@10 -- # set +x 00:12:43.664 ************************************ 00:12:43.664 START TEST bdev_json_nonenclosed 00:12:43.664 ************************************ 00:12:43.664 14:13:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:43.925 [2024-11-19 14:13:42.277683] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:43.925 [2024-11-19 14:13:42.277813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68732 ] 00:12:43.925 [2024-11-19 14:13:42.432047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.186 [2024-11-19 14:13:42.653763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.186 [2024-11-19 14:13:42.653962] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:44.186 [2024-11-19 14:13:42.653989] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:44.448 00:12:44.448 real 0m0.755s 00:12:44.448 user 0m0.516s 00:12:44.448 sys 0m0.131s 00:12:44.448 14:13:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:44.448 ************************************ 00:12:44.448 END TEST bdev_json_nonenclosed 00:12:44.448 ************************************ 00:12:44.448 14:13:42 -- common/autotest_common.sh@10 -- # set +x 00:12:44.708 14:13:43 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:44.708 14:13:43 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:44.708 14:13:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:44.708 14:13:43 -- common/autotest_common.sh@10 -- # set +x 00:12:44.708 ************************************ 00:12:44.708 START TEST bdev_json_nonarray 00:12:44.708 ************************************ 00:12:44.708 14:13:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:44.708 [2024-11-19 14:13:43.089806] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
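bdev_json_nonenclosed above, and bdev_json_nonarray next, are negative tests: bdevperf is fed a deliberately malformed config and must fail through spdk_app_stop with a non-zero status rather than crash. A hypothetical minimal reproduction of the "not enclosed in {}" case; the real fixture file may differ:

  printf '[]\n' > /tmp/nonenclosed.json   # parses as JSON, but the top level is not an object
  build/examples/bdevperf --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1
  echo "exit status: $?"                  # expected non-zero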
00:12:44.708 [2024-11-19 14:13:43.089956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68762 ] 00:12:44.708 [2024-11-19 14:13:43.244297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.969 [2024-11-19 14:13:43.464294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.969 [2024-11-19 14:13:43.464498] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:12:44.969 [2024-11-19 14:13:43.464518] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:45.231 00:12:45.231 real 0m0.748s 00:12:45.231 user 0m0.511s 00:12:45.231 sys 0m0.129s 00:12:45.231 14:13:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:45.231 ************************************ 00:12:45.231 END TEST bdev_json_nonarray 00:12:45.231 ************************************ 00:12:45.231 14:13:43 -- common/autotest_common.sh@10 -- # set +x 00:12:45.493 14:13:43 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:12:45.493 14:13:43 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:12:45.493 14:13:43 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:12:45.493 14:13:43 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:12:45.493 14:13:43 -- bdev/blockdev.sh@809 -- # cleanup 00:12:45.493 14:13:43 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:45.493 14:13:43 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:45.493 14:13:43 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:12:45.493 14:13:43 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:12:45.493 14:13:43 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:12:45.493 14:13:43 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:12:45.493 14:13:43 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:46.437 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:48.350 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:12:52.559 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:12:52.559 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:12:52.559 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:12:52.559 ************************************ 00:12:52.559 END TEST blockdev_xnvme 00:12:52.559 ************************************ 00:12:52.559 00:12:52.559 real 1m1.014s 00:12:52.559 user 1m24.991s 00:12:52.559 sys 0m39.917s 00:12:52.559 14:13:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:52.559 14:13:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.559 14:13:50 -- spdk/autotest.sh@246 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:12:52.559 14:13:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:52.559 14:13:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:52.559 14:13:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.559 ************************************ 00:12:52.559 START TEST ublk 00:12:52.559 ************************************ 00:12:52.559 14:13:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:12:52.559 * Looking for test storage... 
00:12:52.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:12:52.559 14:13:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:52.559 14:13:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:52.559 14:13:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:52.559 14:13:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:52.559 14:13:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:52.559 14:13:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:52.559 14:13:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:52.559 14:13:50 -- scripts/common.sh@335 -- # IFS=.-: 00:12:52.559 14:13:50 -- scripts/common.sh@335 -- # read -ra ver1 00:12:52.559 14:13:50 -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.559 14:13:50 -- scripts/common.sh@336 -- # read -ra ver2 00:12:52.559 14:13:50 -- scripts/common.sh@337 -- # local 'op=<' 00:12:52.559 14:13:50 -- scripts/common.sh@339 -- # ver1_l=2 00:12:52.559 14:13:50 -- scripts/common.sh@340 -- # ver2_l=1 00:12:52.559 14:13:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:52.559 14:13:50 -- scripts/common.sh@343 -- # case "$op" in 00:12:52.559 14:13:50 -- scripts/common.sh@344 -- # : 1 00:12:52.559 14:13:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:52.559 14:13:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:52.559 14:13:50 -- scripts/common.sh@364 -- # decimal 1 00:12:52.559 14:13:50 -- scripts/common.sh@352 -- # local d=1 00:12:52.559 14:13:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.559 14:13:50 -- scripts/common.sh@354 -- # echo 1 00:12:52.559 14:13:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:52.559 14:13:50 -- scripts/common.sh@365 -- # decimal 2 00:12:52.559 14:13:50 -- scripts/common.sh@352 -- # local d=2 00:12:52.559 14:13:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.559 14:13:50 -- scripts/common.sh@354 -- # echo 2 00:12:52.559 14:13:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:52.559 14:13:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:52.559 14:13:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:52.559 14:13:50 -- scripts/common.sh@367 -- # return 0 00:12:52.559 14:13:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.559 14:13:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:52.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.559 --rc genhtml_branch_coverage=1 00:12:52.559 --rc genhtml_function_coverage=1 00:12:52.559 --rc genhtml_legend=1 00:12:52.559 --rc geninfo_all_blocks=1 00:12:52.559 --rc geninfo_unexecuted_blocks=1 00:12:52.559 00:12:52.559 ' 00:12:52.559 14:13:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:52.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.559 --rc genhtml_branch_coverage=1 00:12:52.559 --rc genhtml_function_coverage=1 00:12:52.559 --rc genhtml_legend=1 00:12:52.559 --rc geninfo_all_blocks=1 00:12:52.559 --rc geninfo_unexecuted_blocks=1 00:12:52.559 00:12:52.559 ' 00:12:52.559 14:13:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:52.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.559 --rc genhtml_branch_coverage=1 00:12:52.559 --rc genhtml_function_coverage=1 00:12:52.559 --rc genhtml_legend=1 00:12:52.559 --rc geninfo_all_blocks=1 00:12:52.559 --rc geninfo_unexecuted_blocks=1 00:12:52.559 00:12:52.559 ' 00:12:52.559 14:13:50 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:52.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.559 --rc genhtml_branch_coverage=1 00:12:52.559 --rc genhtml_function_coverage=1 00:12:52.559 --rc genhtml_legend=1 00:12:52.559 --rc geninfo_all_blocks=1 00:12:52.559 --rc geninfo_unexecuted_blocks=1 00:12:52.559 00:12:52.559 ' 00:12:52.559 14:13:50 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:12:52.559 14:13:50 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:12:52.559 14:13:50 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:12:52.559 14:13:50 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:12:52.559 14:13:50 -- lvol/common.sh@9 -- # AIO_BS=4096 00:12:52.559 14:13:50 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:12:52.559 14:13:50 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:12:52.559 14:13:50 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:12:52.559 14:13:50 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:12:52.559 14:13:50 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:12:52.559 14:13:50 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:12:52.559 14:13:50 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:12:52.559 14:13:50 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:12:52.559 14:13:50 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:12:52.559 14:13:50 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:12:52.559 14:13:50 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:12:52.559 14:13:50 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:12:52.559 14:13:50 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:12:52.559 14:13:50 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:12:52.559 14:13:50 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:12:52.559 14:13:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:52.559 14:13:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:52.559 14:13:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.559 ************************************ 00:12:52.559 START TEST test_save_ublk_config 00:12:52.559 ************************************ 00:12:52.559 14:13:50 -- common/autotest_common.sh@1114 -- # test_save_config 00:12:52.559 14:13:50 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:12:52.559 14:13:50 -- ublk/ublk.sh@103 -- # tgtpid=69071 00:12:52.559 14:13:50 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:12:52.559 14:13:50 -- ublk/ublk.sh@106 -- # waitforlisten 69071 00:12:52.559 14:13:50 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:12:52.559 14:13:50 -- common/autotest_common.sh@829 -- # '[' -z 69071 ']' 00:12:52.559 14:13:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.559 14:13:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:52.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.560 14:13:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.560 14:13:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:52.560 14:13:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.560 [2024-11-19 14:13:51.001772] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:52.560 [2024-11-19 14:13:51.001935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69071 ] 00:12:52.820 [2024-11-19 14:13:51.154375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.082 [2024-11-19 14:13:51.392730] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:53.082 [2024-11-19 14:13:51.392986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.025 14:13:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:54.025 14:13:52 -- common/autotest_common.sh@862 -- # return 0 00:12:54.025 14:13:52 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:12:54.025 14:13:52 -- ublk/ublk.sh@108 -- # rpc_cmd 00:12:54.025 14:13:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.025 14:13:52 -- common/autotest_common.sh@10 -- # set +x 00:12:54.025 [2024-11-19 14:13:52.529743] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:54.287 malloc0 00:12:54.287 [2024-11-19 14:13:52.601034] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:12:54.287 [2024-11-19 14:13:52.601131] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:12:54.287 [2024-11-19 14:13:52.601140] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:54.287 [2024-11-19 14:13:52.601149] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:54.287 [2024-11-19 14:13:52.610001] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:54.287 [2024-11-19 14:13:52.610041] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:54.287 [2024-11-19 14:13:52.616913] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:54.287 [2024-11-19 14:13:52.617049] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:54.287 [2024-11-19 14:13:52.633908] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:54.287 0 00:12:54.287 14:13:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.287 14:13:52 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:12:54.287 14:13:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.287 14:13:52 -- common/autotest_common.sh@10 -- # set +x 00:12:54.549 14:13:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.549 14:13:52 -- ublk/ublk.sh@115 -- # config='{ 00:12:54.549 "subsystems": [ 00:12:54.549 { 00:12:54.549 "subsystem": "iobuf", 00:12:54.549 "config": [ 00:12:54.549 { 00:12:54.549 "method": "iobuf_set_options", 00:12:54.549 "params": { 00:12:54.549 "small_pool_count": 8192, 00:12:54.549 "large_pool_count": 1024, 00:12:54.549 "small_bufsize": 8192, 00:12:54.549 "large_bufsize": 135168 00:12:54.549 } 00:12:54.549 } 00:12:54.549 ] 00:12:54.549 }, 00:12:54.549 { 00:12:54.549 "subsystem": "sock", 00:12:54.549 "config": [ 00:12:54.549 { 00:12:54.549 "method": "sock_impl_set_options", 00:12:54.549 "params": { 00:12:54.549 "impl_name": "posix", 00:12:54.549 "recv_buf_size": 2097152, 00:12:54.549 "send_buf_size": 2097152, 00:12:54.549 "enable_recv_pipe": true, 00:12:54.549 "enable_quickack": false, 00:12:54.549 "enable_placement_id": 0, 00:12:54.549 
"enable_zerocopy_send_server": true, 00:12:54.549 "enable_zerocopy_send_client": false, 00:12:54.549 "zerocopy_threshold": 0, 00:12:54.549 "tls_version": 0, 00:12:54.549 "enable_ktls": false 00:12:54.549 } 00:12:54.549 }, 00:12:54.549 { 00:12:54.549 "method": "sock_impl_set_options", 00:12:54.549 "params": { 00:12:54.549 "impl_name": "ssl", 00:12:54.549 "recv_buf_size": 4096, 00:12:54.549 "send_buf_size": 4096, 00:12:54.549 "enable_recv_pipe": true, 00:12:54.549 "enable_quickack": false, 00:12:54.549 "enable_placement_id": 0, 00:12:54.549 "enable_zerocopy_send_server": true, 00:12:54.549 "enable_zerocopy_send_client": false, 00:12:54.549 "zerocopy_threshold": 0, 00:12:54.549 "tls_version": 0, 00:12:54.549 "enable_ktls": false 00:12:54.549 } 00:12:54.549 } 00:12:54.549 ] 00:12:54.549 }, 00:12:54.549 { 00:12:54.549 "subsystem": "vmd", 00:12:54.549 "config": [] 00:12:54.549 }, 00:12:54.549 { 00:12:54.549 "subsystem": "accel", 00:12:54.549 "config": [ 00:12:54.549 { 00:12:54.549 "method": "accel_set_options", 00:12:54.549 "params": { 00:12:54.549 "small_cache_size": 128, 00:12:54.549 "large_cache_size": 16, 00:12:54.549 "task_count": 2048, 00:12:54.549 "sequence_count": 2048, 00:12:54.549 "buf_count": 2048 00:12:54.549 } 00:12:54.549 } 00:12:54.549 ] 00:12:54.549 }, 00:12:54.549 { 00:12:54.549 "subsystem": "bdev", 00:12:54.549 "config": [ 00:12:54.549 { 00:12:54.549 "method": "bdev_set_options", 00:12:54.549 "params": { 00:12:54.549 "bdev_io_pool_size": 65535, 00:12:54.549 "bdev_io_cache_size": 256, 00:12:54.549 "bdev_auto_examine": true, 00:12:54.549 "iobuf_small_cache_size": 128, 00:12:54.549 "iobuf_large_cache_size": 16 00:12:54.549 } 00:12:54.549 }, 00:12:54.549 { 00:12:54.549 "method": "bdev_raid_set_options", 00:12:54.549 "params": { 00:12:54.549 "process_window_size_kb": 1024 00:12:54.549 } 00:12:54.549 }, 00:12:54.549 { 00:12:54.549 "method": "bdev_iscsi_set_options", 00:12:54.549 "params": { 00:12:54.549 "timeout_sec": 30 00:12:54.549 } 00:12:54.549 }, 00:12:54.549 { 00:12:54.549 "method": "bdev_nvme_set_options", 00:12:54.549 "params": { 00:12:54.549 "action_on_timeout": "none", 00:12:54.549 "timeout_us": 0, 00:12:54.549 "timeout_admin_us": 0, 00:12:54.549 "keep_alive_timeout_ms": 10000, 00:12:54.549 "transport_retry_count": 4, 00:12:54.549 "arbitration_burst": 0, 00:12:54.549 "low_priority_weight": 0, 00:12:54.549 "medium_priority_weight": 0, 00:12:54.549 "high_priority_weight": 0, 00:12:54.549 "nvme_adminq_poll_period_us": 10000, 00:12:54.549 "nvme_ioq_poll_period_us": 0, 00:12:54.549 "io_queue_requests": 0, 00:12:54.549 "delay_cmd_submit": true, 00:12:54.549 "bdev_retry_count": 3, 00:12:54.550 "transport_ack_timeout": 0, 00:12:54.550 "ctrlr_loss_timeout_sec": 0, 00:12:54.550 "reconnect_delay_sec": 0, 00:12:54.550 "fast_io_fail_timeout_sec": 0, 00:12:54.550 "generate_uuids": false, 00:12:54.550 "transport_tos": 0, 00:12:54.550 "io_path_stat": false, 00:12:54.550 "allow_accel_sequence": false 00:12:54.550 } 00:12:54.550 }, 00:12:54.550 { 00:12:54.550 "method": "bdev_nvme_set_hotplug", 00:12:54.550 "params": { 00:12:54.550 "period_us": 100000, 00:12:54.550 "enable": false 00:12:54.550 } 00:12:54.550 }, 00:12:54.550 { 00:12:54.550 "method": "bdev_malloc_create", 00:12:54.550 "params": { 00:12:54.550 "name": "malloc0", 00:12:54.550 "num_blocks": 8192, 00:12:54.550 "block_size": 4096, 00:12:54.550 "physical_block_size": 4096, 00:12:54.550 "uuid": "d48add5d-7782-4c45-b3b3-18ce6ac95df0", 00:12:54.550 "optimal_io_boundary": 0 00:12:54.550 } 00:12:54.550 }, 00:12:54.550 { 00:12:54.550 
"method": "bdev_wait_for_examine" 00:12:54.550 } 00:12:54.550 ] 00:12:54.550 }, 00:12:54.550 { 00:12:54.550 "subsystem": "scsi", 00:12:54.550 "config": null 00:12:54.550 }, 00:12:54.550 { 00:12:54.550 "subsystem": "scheduler", 00:12:54.550 "config": [ 00:12:54.550 { 00:12:54.550 "method": "framework_set_scheduler", 00:12:54.550 "params": { 00:12:54.550 "name": "static" 00:12:54.550 } 00:12:54.550 } 00:12:54.550 ] 00:12:54.550 }, 00:12:54.550 { 00:12:54.550 "subsystem": "vhost_scsi", 00:12:54.550 "config": [] 00:12:54.550 }, 00:12:54.550 { 00:12:54.550 "subsystem": "vhost_blk", 00:12:54.550 "config": [] 00:12:54.550 }, 00:12:54.550 { 00:12:54.550 "subsystem": "ublk", 00:12:54.550 "config": [ 00:12:54.550 { 00:12:54.550 "method": "ublk_create_target", 00:12:54.550 "params": { 00:12:54.550 "cpumask": "1" 00:12:54.550 } 00:12:54.550 }, 00:12:54.550 { 00:12:54.550 "method": "ublk_start_disk", 00:12:54.550 "params": { 00:12:54.550 "bdev_name": "malloc0", 00:12:54.550 "ublk_id": 0, 00:12:54.550 "num_queues": 1, 00:12:54.550 "queue_depth": 128 00:12:54.550 } 00:12:54.550 } 00:12:54.550 ] 00:12:54.550 }, 00:12:54.550 { 00:12:54.550 "subsystem": "nbd", 00:12:54.550 "config": [] 00:12:54.550 }, 00:12:54.550 { 00:12:54.550 "subsystem": "nvmf", 00:12:54.550 "config": [ 00:12:54.550 { 00:12:54.550 "method": "nvmf_set_config", 00:12:54.550 "params": { 00:12:54.550 "discovery_filter": "match_any", 00:12:54.550 "admin_cmd_passthru": { 00:12:54.550 "identify_ctrlr": false 00:12:54.550 } 00:12:54.550 } 00:12:54.550 }, 00:12:54.550 { 00:12:54.550 "method": "nvmf_set_max_subsystems", 00:12:54.550 "params": { 00:12:54.550 "max_subsystems": 1024 00:12:54.550 } 00:12:54.550 }, 00:12:54.550 { 00:12:54.550 "method": "nvmf_set_crdt", 00:12:54.550 "params": { 00:12:54.550 "crdt1": 0, 00:12:54.550 "crdt2": 0, 00:12:54.550 "crdt3": 0 00:12:54.550 } 00:12:54.550 } 00:12:54.550 ] 00:12:54.550 }, 00:12:54.550 { 00:12:54.550 "subsystem": "iscsi", 00:12:54.550 "config": [ 00:12:54.550 { 00:12:54.550 "method": "iscsi_set_options", 00:12:54.550 "params": { 00:12:54.550 "node_base": "iqn.2016-06.io.spdk", 00:12:54.550 "max_sessions": 128, 00:12:54.550 "max_connections_per_session": 2, 00:12:54.550 "max_queue_depth": 64, 00:12:54.550 "default_time2wait": 2, 00:12:54.550 "default_time2retain": 20, 00:12:54.550 "first_burst_length": 8192, 00:12:54.550 "immediate_data": true, 00:12:54.550 "allow_duplicated_isid": false, 00:12:54.550 "error_recovery_level": 0, 00:12:54.550 "nop_timeout": 60, 00:12:54.550 "nop_in_interval": 30, 00:12:54.550 "disable_chap": false, 00:12:54.550 "require_chap": false, 00:12:54.550 "mutual_chap": false, 00:12:54.550 "chap_group": 0, 00:12:54.550 "max_large_datain_per_connection": 64, 00:12:54.550 "max_r2t_per_connection": 4, 00:12:54.550 "pdu_pool_size": 36864, 00:12:54.550 "immediate_data_pool_size": 16384, 00:12:54.550 "data_out_pool_size": 2048 00:12:54.550 } 00:12:54.550 } 00:12:54.550 ] 00:12:54.550 } 00:12:54.550 ] 00:12:54.550 }' 00:12:54.550 14:13:52 -- ublk/ublk.sh@116 -- # killprocess 69071 00:12:54.550 14:13:52 -- common/autotest_common.sh@936 -- # '[' -z 69071 ']' 00:12:54.550 14:13:52 -- common/autotest_common.sh@940 -- # kill -0 69071 00:12:54.550 14:13:52 -- common/autotest_common.sh@941 -- # uname 00:12:54.550 14:13:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:54.550 14:13:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69071 00:12:54.550 14:13:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:54.550 killing process with pid 
69071 00:12:54.550 14:13:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:54.550 14:13:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69071' 00:12:54.550 14:13:52 -- common/autotest_common.sh@955 -- # kill 69071 00:12:54.550 14:13:52 -- common/autotest_common.sh@960 -- # wait 69071 00:12:55.496 [2024-11-19 14:13:53.954594] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:55.496 [2024-11-19 14:13:53.979957] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:55.496 [2024-11-19 14:13:53.980054] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:55.496 [2024-11-19 14:13:53.987901] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:55.496 [2024-11-19 14:13:53.987945] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:55.496 [2024-11-19 14:13:53.987955] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:55.496 [2024-11-19 14:13:53.988016] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:55.496 [2024-11-19 14:13:53.988121] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:56.883 14:13:55 -- ublk/ublk.sh@119 -- # tgtpid=69133 00:12:56.883 14:13:55 -- ublk/ublk.sh@121 -- # waitforlisten 69133 00:12:56.883 14:13:55 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:12:56.883 14:13:55 -- common/autotest_common.sh@829 -- # '[' -z 69133 ']' 00:12:56.883 14:13:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.883 14:13:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:56.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.883 14:13:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:56.883 14:13:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:56.883 14:13:55 -- common/autotest_common.sh@10 -- # set +x 00:12:56.883 14:13:55 -- ublk/ublk.sh@118 -- # echo '{ 00:12:56.883 "subsystems": [ 00:12:56.883 { 00:12:56.883 "subsystem": "iobuf", 00:12:56.883 "config": [ 00:12:56.883 { 00:12:56.883 "method": "iobuf_set_options", 00:12:56.883 "params": { 00:12:56.883 "small_pool_count": 8192, 00:12:56.883 "large_pool_count": 1024, 00:12:56.883 "small_bufsize": 8192, 00:12:56.883 "large_bufsize": 135168 00:12:56.883 } 00:12:56.883 } 00:12:56.883 ] 00:12:56.883 }, 00:12:56.883 { 00:12:56.883 "subsystem": "sock", 00:12:56.883 "config": [ 00:12:56.883 { 00:12:56.883 "method": "sock_impl_set_options", 00:12:56.883 "params": { 00:12:56.883 "impl_name": "posix", 00:12:56.883 "recv_buf_size": 2097152, 00:12:56.883 "send_buf_size": 2097152, 00:12:56.883 "enable_recv_pipe": true, 00:12:56.883 "enable_quickack": false, 00:12:56.883 "enable_placement_id": 0, 00:12:56.883 "enable_zerocopy_send_server": true, 00:12:56.883 "enable_zerocopy_send_client": false, 00:12:56.883 "zerocopy_threshold": 0, 00:12:56.883 "tls_version": 0, 00:12:56.883 "enable_ktls": false 00:12:56.883 } 00:12:56.883 }, 00:12:56.883 { 00:12:56.883 "method": "sock_impl_set_options", 00:12:56.883 "params": { 00:12:56.883 "impl_name": "ssl", 00:12:56.883 "recv_buf_size": 4096, 00:12:56.883 "send_buf_size": 4096, 00:12:56.883 "enable_recv_pipe": true, 00:12:56.883 "enable_quickack": false, 00:12:56.883 "enable_placement_id": 0, 00:12:56.883 "enable_zerocopy_send_server": true, 00:12:56.883 "enable_zerocopy_send_client": false, 00:12:56.883 "zerocopy_threshold": 0, 00:12:56.883 "tls_version": 0, 00:12:56.883 "enable_ktls": false 00:12:56.883 } 00:12:56.883 } 00:12:56.883 ] 00:12:56.883 }, 00:12:56.883 { 00:12:56.883 "subsystem": "vmd", 00:12:56.883 "config": [] 00:12:56.883 }, 00:12:56.883 { 00:12:56.883 "subsystem": "accel", 00:12:56.883 "config": [ 00:12:56.883 { 00:12:56.883 "method": "accel_set_options", 00:12:56.883 "params": { 00:12:56.883 "small_cache_size": 128, 00:12:56.883 "large_cache_size": 16, 00:12:56.883 "task_count": 2048, 00:12:56.883 "sequence_count": 2048, 00:12:56.883 "buf_count": 2048 00:12:56.883 } 00:12:56.883 } 00:12:56.883 ] 00:12:56.883 }, 00:12:56.883 { 00:12:56.883 "subsystem": "bdev", 00:12:56.883 "config": [ 00:12:56.883 { 00:12:56.883 "method": "bdev_set_options", 00:12:56.883 "params": { 00:12:56.883 "bdev_io_pool_size": 65535, 00:12:56.883 "bdev_io_cache_size": 256, 00:12:56.883 "bdev_auto_examine": true, 00:12:56.883 "iobuf_small_cache_size": 128, 00:12:56.883 "iobuf_large_cache_size": 16 00:12:56.883 } 00:12:56.883 }, 00:12:56.883 { 00:12:56.883 "method": "bdev_raid_set_options", 00:12:56.883 "params": { 00:12:56.883 "process_window_size_kb": 1024 00:12:56.883 } 00:12:56.883 }, 00:12:56.883 { 00:12:56.883 "method": "bdev_iscsi_set_options", 00:12:56.883 "params": { 00:12:56.883 "timeout_sec": 30 00:12:56.883 } 00:12:56.883 }, 00:12:56.883 { 00:12:56.883 "method": "bdev_nvme_set_options", 00:12:56.883 "params": { 00:12:56.883 "action_on_timeout": "none", 00:12:56.883 "timeout_us": 0, 00:12:56.883 "timeout_admin_us": 0, 00:12:56.883 "keep_alive_timeout_ms": 10000, 00:12:56.883 "transport_retry_count": 4, 00:12:56.883 "arbitration_burst": 0, 00:12:56.883 "low_priority_weight": 0, 00:12:56.883 "medium_priority_weight": 0, 00:12:56.883 "high_priority_weight": 0, 00:12:56.883 "nvme_adminq_poll_period_us": 10000, 00:12:56.883 "nvme_ioq_poll_period_us": 0, 00:12:56.883 
"io_queue_requests": 0, 00:12:56.883 "delay_cmd_submit": true, 00:12:56.883 "bdev_retry_count": 3, 00:12:56.883 "transport_ack_timeout": 0, 00:12:56.884 "ctrlr_loss_timeout_sec": 0, 00:12:56.884 "reconnect_delay_sec": 0, 00:12:56.884 "fast_io_fail_timeout_sec": 0, 00:12:56.884 "generate_uuids": false, 00:12:56.884 "transport_tos": 0, 00:12:56.884 "io_path_stat": false, 00:12:56.884 "allow_accel_sequence": false 00:12:56.884 } 00:12:56.884 }, 00:12:56.884 { 00:12:56.884 "method": "bdev_nvme_set_hotplug", 00:12:56.884 "params": { 00:12:56.884 "period_us": 100000, 00:12:56.884 "enable": false 00:12:56.884 } 00:12:56.884 }, 00:12:56.884 { 00:12:56.884 "method": "bdev_malloc_create", 00:12:56.884 "params": { 00:12:56.884 "name": "malloc0", 00:12:56.884 "num_blocks": 8192, 00:12:56.884 "block_size": 4096, 00:12:56.884 "physical_block_size": 4096, 00:12:56.884 "uuid": "d48add5d-7782-4c45-b3b3-18ce6ac95df0", 00:12:56.884 "optimal_io_boundary": 0 00:12:56.884 } 00:12:56.884 }, 00:12:56.884 { 00:12:56.884 "method": "bdev_wait_for_examine" 00:12:56.884 } 00:12:56.884 ] 00:12:56.884 }, 00:12:56.884 { 00:12:56.884 "subsystem": "scsi", 00:12:56.884 "config": null 00:12:56.884 }, 00:12:56.884 { 00:12:56.884 "subsystem": "scheduler", 00:12:56.884 "config": [ 00:12:56.884 { 00:12:56.884 "method": "framework_set_scheduler", 00:12:56.884 "params": { 00:12:56.884 "name": "static" 00:12:56.884 } 00:12:56.884 } 00:12:56.884 ] 00:12:56.884 }, 00:12:56.884 { 00:12:56.884 "subsystem": "vhost_scsi", 00:12:56.884 "config": [] 00:12:56.884 }, 00:12:56.884 { 00:12:56.884 "subsystem": "vhost_blk", 00:12:56.884 "config": [] 00:12:56.884 }, 00:12:56.884 { 00:12:56.884 "subsystem": "ublk", 00:12:56.884 "config": [ 00:12:56.884 { 00:12:56.884 "method": "ublk_create_target", 00:12:56.884 "params": { 00:12:56.884 "cpumask": "1" 00:12:56.884 } 00:12:56.884 }, 00:12:56.884 { 00:12:56.884 "method": "ublk_start_disk", 00:12:56.884 "params": { 00:12:56.884 "bdev_name": "malloc0", 00:12:56.884 "ublk_id": 0, 00:12:56.884 "num_queues": 1, 00:12:56.884 "queue_depth": 128 00:12:56.884 } 00:12:56.884 } 00:12:56.884 ] 00:12:56.884 }, 00:12:56.884 { 00:12:56.884 "subsystem": "nbd", 00:12:56.884 "config": [] 00:12:56.884 }, 00:12:56.884 { 00:12:56.884 "subsystem": "nvmf", 00:12:56.884 "config": [ 00:12:56.884 { 00:12:56.884 "method": "nvmf_set_config", 00:12:56.884 "params": { 00:12:56.884 "discovery_filter": "match_any", 00:12:56.884 "admin_cmd_passthru": { 00:12:56.884 "identify_ctrlr": false 00:12:56.884 } 00:12:56.884 } 00:12:56.884 }, 00:12:56.884 { 00:12:56.884 "method": "nvmf_set_max_subsystems", 00:12:56.884 "params": { 00:12:56.884 "max_subsystems": 1024 00:12:56.884 } 00:12:56.884 }, 00:12:56.884 { 00:12:56.884 "method": "nvmf_set_crdt", 00:12:56.884 "params": { 00:12:56.884 "crdt1": 0, 00:12:56.884 "crdt2": 0, 00:12:56.884 "crdt3": 0 00:12:56.884 } 00:12:56.884 } 00:12:56.884 ] 00:12:56.884 }, 00:12:56.884 { 00:12:56.884 "subsystem": "iscsi", 00:12:56.884 "config": [ 00:12:56.884 { 00:12:56.884 "method": "iscsi_set_options", 00:12:56.884 "params": { 00:12:56.884 "node_base": "iqn.2016-06.io.spdk", 00:12:56.884 "max_sessions": 128, 00:12:56.884 "max_connections_per_session": 2, 00:12:56.884 "max_queue_depth": 64, 00:12:56.884 "default_time2wait": 2, 00:12:56.884 "default_time2retain": 20, 00:12:56.884 "first_burst_length": 8192, 00:12:56.884 "immediate_data": true, 00:12:56.884 "allow_duplicated_isid": false, 00:12:56.884 "error_recovery_level": 0, 00:12:56.884 "nop_timeout": 60, 00:12:56.884 "nop_in_interval": 30, 00:12:56.884 
"disable_chap": false, 00:12:56.884 "require_chap": false, 00:12:56.884 "mutual_chap": false, 00:12:56.884 "chap_group": 0, 00:12:56.884 "max_large_datain_per_connection": 64, 00:12:56.884 "max_r2t_per_connection": 4, 00:12:56.884 "pdu_pool_size": 36864, 00:12:56.884 "immediate_data_pool_size": 16384, 00:12:56.884 "data_out_pool_size": 2048 00:12:56.884 } 00:12:56.884 } 00:12:56.884 ] 00:12:56.884 } 00:12:56.884 ] 00:12:56.884 }' 00:12:56.884 [2024-11-19 14:13:55.256976] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:56.884 [2024-11-19 14:13:55.257092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69133 ] 00:12:56.884 [2024-11-19 14:13:55.407573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.145 [2024-11-19 14:13:55.633900] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:57.145 [2024-11-19 14:13:55.634132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.143 [2024-11-19 14:13:56.424714] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:58.143 [2024-11-19 14:13:56.432036] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:12:58.143 [2024-11-19 14:13:56.432132] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:12:58.143 [2024-11-19 14:13:56.432140] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:58.143 [2024-11-19 14:13:56.432148] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:58.143 [2024-11-19 14:13:56.441003] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:58.143 [2024-11-19 14:13:56.441037] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:58.143 [2024-11-19 14:13:56.447916] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:58.143 [2024-11-19 14:13:56.448043] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:58.143 [2024-11-19 14:13:56.464908] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:58.404 14:13:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:58.404 14:13:56 -- common/autotest_common.sh@862 -- # return 0 00:12:58.404 14:13:56 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:12:58.404 14:13:56 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:12:58.404 14:13:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.404 14:13:56 -- common/autotest_common.sh@10 -- # set +x 00:12:58.404 14:13:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.404 14:13:56 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:12:58.404 14:13:56 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:12:58.404 14:13:56 -- ublk/ublk.sh@125 -- # killprocess 69133 00:12:58.404 14:13:56 -- common/autotest_common.sh@936 -- # '[' -z 69133 ']' 00:12:58.404 14:13:56 -- common/autotest_common.sh@940 -- # kill -0 69133 00:12:58.404 14:13:56 -- common/autotest_common.sh@941 -- # uname 00:12:58.404 14:13:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:58.404 14:13:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69133 00:12:58.404 
14:13:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:58.404 killing process with pid 69133 00:12:58.404 14:13:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:58.404 14:13:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69133' 00:12:58.404 14:13:56 -- common/autotest_common.sh@955 -- # kill 69133 00:12:58.404 14:13:56 -- common/autotest_common.sh@960 -- # wait 69133 00:12:59.788 [2024-11-19 14:13:57.927286] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:59.788 [2024-11-19 14:13:57.959010] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:59.788 [2024-11-19 14:13:57.959151] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:59.788 [2024-11-19 14:13:57.966930] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:59.788 [2024-11-19 14:13:57.966993] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:59.788 [2024-11-19 14:13:57.967002] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:59.788 [2024-11-19 14:13:57.967031] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:59.788 [2024-11-19 14:13:57.967191] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:00.731 14:13:59 -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:00.731 00:13:00.731 real 0m8.361s 00:13:00.731 user 0m6.334s 00:13:00.731 sys 0m2.999s 00:13:00.731 14:13:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:00.731 ************************************ 00:13:00.731 END TEST test_save_ublk_config 00:13:00.731 ************************************ 00:13:00.731 14:13:59 -- common/autotest_common.sh@10 -- # set +x 00:13:00.993 14:13:59 -- ublk/ublk.sh@139 -- # spdk_pid=69212 00:13:00.993 14:13:59 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:00.993 14:13:59 -- ublk/ublk.sh@141 -- # waitforlisten 69212 00:13:00.993 14:13:59 -- common/autotest_common.sh@829 -- # '[' -z 69212 ']' 00:13:00.993 14:13:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.993 14:13:59 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:00.993 14:13:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:00.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.993 14:13:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.993 14:13:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:00.993 14:13:59 -- common/autotest_common.sh@10 -- # set +x 00:13:00.993 [2024-11-19 14:13:59.385975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:00.993 [2024-11-19 14:13:59.386092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69212 ] 00:13:00.993 [2024-11-19 14:13:59.531435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:01.251 [2024-11-19 14:13:59.701174] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:01.251 [2024-11-19 14:13:59.701673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.251 [2024-11-19 14:13:59.701727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.627 14:14:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.627 14:14:00 -- common/autotest_common.sh@862 -- # return 0 00:13:02.627 14:14:00 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:02.627 14:14:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:02.627 14:14:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.627 14:14:00 -- common/autotest_common.sh@10 -- # set +x 00:13:02.627 ************************************ 00:13:02.627 START TEST test_create_ublk 00:13:02.627 ************************************ 00:13:02.627 14:14:00 -- common/autotest_common.sh@1114 -- # test_create_ublk 00:13:02.627 14:14:00 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:02.627 14:14:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.627 14:14:00 -- common/autotest_common.sh@10 -- # set +x 00:13:02.627 [2024-11-19 14:14:00.896557] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:02.627 14:14:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.627 14:14:00 -- ublk/ublk.sh@33 -- # ublk_target= 00:13:02.627 14:14:00 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:02.627 14:14:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.627 14:14:00 -- common/autotest_common.sh@10 -- # set +x 00:13:02.627 14:14:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.627 14:14:01 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:02.627 14:14:01 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:02.627 14:14:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.627 14:14:01 -- common/autotest_common.sh@10 -- # set +x 00:13:02.627 [2024-11-19 14:14:01.076014] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:02.627 [2024-11-19 14:14:01.076360] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:02.627 [2024-11-19 14:14:01.076371] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:02.627 [2024-11-19 14:14:01.076379] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:02.627 [2024-11-19 14:14:01.083912] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:02.627 [2024-11-19 14:14:01.083932] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:02.627 [2024-11-19 14:14:01.091899] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:02.627 [2024-11-19 14:14:01.106064] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:02.627 [2024-11-19 14:14:01.126912] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:13:02.627 14:14:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.627 14:14:01 -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:02.627 14:14:01 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:02.627 14:14:01 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:02.627 14:14:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.627 14:14:01 -- common/autotest_common.sh@10 -- # set +x 00:13:02.627 14:14:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.627 14:14:01 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:02.627 { 00:13:02.627 "ublk_device": "/dev/ublkb0", 00:13:02.627 "id": 0, 00:13:02.627 "queue_depth": 512, 00:13:02.627 "num_queues": 4, 00:13:02.627 "bdev_name": "Malloc0" 00:13:02.627 } 00:13:02.627 ]' 00:13:02.627 14:14:01 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:02.627 14:14:01 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:02.627 14:14:01 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:02.885 14:14:01 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:02.885 14:14:01 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:02.885 14:14:01 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:02.885 14:14:01 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:02.885 14:14:01 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:02.885 14:14:01 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:02.885 14:14:01 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:02.885 14:14:01 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:02.885 14:14:01 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:02.885 14:14:01 -- lvol/common.sh@41 -- # local offset=0 00:13:02.885 14:14:01 -- lvol/common.sh@42 -- # local size=134217728 00:13:02.885 14:14:01 -- lvol/common.sh@43 -- # local rw=write 00:13:02.885 14:14:01 -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:02.885 14:14:01 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:02.885 14:14:01 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:02.885 14:14:01 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:02.885 14:14:01 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:02.885 14:14:01 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:02.885 14:14:01 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:02.885 fio: verification read phase will never start because write phase uses all of runtime 00:13:02.885 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:02.885 fio-3.35 00:13:02.885 Starting 1 process 00:13:15.082 00:13:15.083 fio_test: (groupid=0, jobs=1): err= 0: pid=69262: Tue Nov 19 14:14:11 2024 00:13:15.083 write: IOPS=14.0k, BW=54.6MiB/s (57.3MB/s)(546MiB/10001msec); 0 zone resets 00:13:15.083 clat (usec): min=39, max=8011, avg=70.74, stdev=149.73 00:13:15.083 lat (usec): min=40, max=8012, avg=71.18, stdev=149.74 00:13:15.083 clat percentiles (usec): 00:13:15.083 | 1.00th=[ 53], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 58], 00:13:15.083 | 
30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 64], 60.00th=[ 65], 00:13:15.083 | 70.00th=[ 67], 80.00th=[ 69], 90.00th=[ 72], 95.00th=[ 75], 00:13:15.083 | 99.00th=[ 85], 99.50th=[ 105], 99.90th=[ 3294], 99.95th=[ 3589], 00:13:15.083 | 99.99th=[ 3949] 00:13:15.083 bw ( KiB/s): min=25384, max=63032, per=99.78%, avg=55816.42, stdev=10313.61, samples=19 00:13:15.083 iops : min= 6346, max=15758, avg=13954.11, stdev=2578.40, samples=19 00:13:15.083 lat (usec) : 50=0.05%, 100=99.43%, 250=0.20%, 500=0.03%, 750=0.01% 00:13:15.083 lat (usec) : 1000=0.01% 00:13:15.083 lat (msec) : 2=0.06%, 4=0.19%, 10=0.01% 00:13:15.083 cpu : usr=1.97%, sys=12.20%, ctx=139867, majf=0, minf=798 00:13:15.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:15.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.083 issued rwts: total=0,139864,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:15.083 00:13:15.083 Run status group 0 (all jobs): 00:13:15.083 WRITE: bw=54.6MiB/s (57.3MB/s), 54.6MiB/s-54.6MiB/s (57.3MB/s-57.3MB/s), io=546MiB (573MB), run=10001-10001msec 00:13:15.083 00:13:15.083 Disk stats (read/write): 00:13:15.083 ublkb0: ios=0/138288, merge=0/0, ticks=0/8371, in_queue=8371, util=99.06% 00:13:15.083 14:14:11 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:15.083 14:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.083 14:14:11 -- common/autotest_common.sh@10 -- # set +x 00:13:15.083 [2024-11-19 14:14:11.536026] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:15.083 [2024-11-19 14:14:11.589899] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:15.083 [2024-11-19 14:14:11.590605] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:15.083 [2024-11-19 14:14:11.597902] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:15.083 [2024-11-19 14:14:11.598139] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:15.083 [2024-11-19 14:14:11.598154] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:15.083 14:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.083 14:14:11 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:13:15.083 14:14:11 -- common/autotest_common.sh@650 -- # local es=0 00:13:15.083 14:14:11 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:15.083 14:14:11 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:15.083 14:14:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.083 14:14:11 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:15.083 14:14:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.083 14:14:11 -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:15.083 14:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.083 14:14:11 -- common/autotest_common.sh@10 -- # set +x 00:13:15.083 [2024-11-19 14:14:11.613980] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:15.083 request: 00:13:15.083 { 00:13:15.083 "ublk_id": 0, 00:13:15.083 "method": "ublk_stop_disk", 00:13:15.083 "req_id": 1 00:13:15.083 } 00:13:15.083 Got JSON-RPC error response 00:13:15.083 response: 00:13:15.083 { 00:13:15.083 "code": -19, 
00:13:15.083 "message": "No such device" 00:13:15.083 } 00:13:15.083 14:14:11 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:15.083 14:14:11 -- common/autotest_common.sh@653 -- # es=1 00:13:15.083 14:14:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:15.083 14:14:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:15.083 14:14:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:15.083 14:14:11 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:15.083 14:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.083 14:14:11 -- common/autotest_common.sh@10 -- # set +x 00:13:15.083 [2024-11-19 14:14:11.629949] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:15.083 [2024-11-19 14:14:11.633807] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:15.083 [2024-11-19 14:14:11.633836] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:15.083 14:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.083 14:14:11 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:15.083 14:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.083 14:14:11 -- common/autotest_common.sh@10 -- # set +x 00:13:15.083 14:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.083 14:14:12 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:15.083 14:14:12 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:15.083 14:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.083 14:14:12 -- common/autotest_common.sh@10 -- # set +x 00:13:15.083 14:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.083 14:14:12 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:15.083 14:14:12 -- lvol/common.sh@26 -- # jq length 00:13:15.083 14:14:12 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:15.083 14:14:12 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:15.083 14:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.083 14:14:12 -- common/autotest_common.sh@10 -- # set +x 00:13:15.083 14:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.083 14:14:12 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:15.083 14:14:12 -- lvol/common.sh@28 -- # jq length 00:13:15.083 14:14:12 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:15.083 00:13:15.083 real 0m11.213s 00:13:15.083 user 0m0.494s 00:13:15.083 sys 0m1.295s 00:13:15.083 14:14:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:15.083 14:14:12 -- common/autotest_common.sh@10 -- # set +x 00:13:15.083 ************************************ 00:13:15.083 END TEST test_create_ublk 00:13:15.083 ************************************ 00:13:15.083 14:14:12 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:15.083 14:14:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:15.083 14:14:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:15.083 14:14:12 -- common/autotest_common.sh@10 -- # set +x 00:13:15.083 ************************************ 00:13:15.083 START TEST test_create_multi_ublk 00:13:15.083 ************************************ 00:13:15.083 14:14:12 -- common/autotest_common.sh@1114 -- # test_create_multi_ublk 00:13:15.083 14:14:12 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:15.083 14:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.083 14:14:12 -- common/autotest_common.sh@10 -- # set +x 00:13:15.083 [2024-11-19 14:14:12.148515] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target 
created successfully 00:13:15.083 14:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.083 14:14:12 -- ublk/ublk.sh@62 -- # ublk_target= 00:13:15.083 14:14:12 -- ublk/ublk.sh@64 -- # seq 0 3 00:13:15.083 14:14:12 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.083 14:14:12 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:15.083 14:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.083 14:14:12 -- common/autotest_common.sh@10 -- # set +x 00:13:15.083 14:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.083 14:14:12 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:15.083 14:14:12 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:15.083 14:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.083 14:14:12 -- common/autotest_common.sh@10 -- # set +x 00:13:15.083 [2024-11-19 14:14:12.387006] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:15.083 [2024-11-19 14:14:12.387352] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:15.083 [2024-11-19 14:14:12.387364] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:15.083 [2024-11-19 14:14:12.387371] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:15.083 [2024-11-19 14:14:12.410901] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:15.083 [2024-11-19 14:14:12.410924] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:15.083 [2024-11-19 14:14:12.422906] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:15.083 [2024-11-19 14:14:12.423449] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:15.083 [2024-11-19 14:14:12.458908] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:15.083 14:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.083 14:14:12 -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:15.083 14:14:12 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.083 14:14:12 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:15.083 14:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.083 14:14:12 -- common/autotest_common.sh@10 -- # set +x 00:13:15.083 14:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.083 14:14:12 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:15.083 14:14:12 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:15.083 14:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.083 14:14:12 -- common/autotest_common.sh@10 -- # set +x 00:13:15.083 [2024-11-19 14:14:12.686990] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:15.083 [2024-11-19 14:14:12.687337] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:15.083 [2024-11-19 14:14:12.687350] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:15.083 [2024-11-19 14:14:12.687355] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:15.083 [2024-11-19 14:14:12.694916] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:15.083 [2024-11-19 14:14:12.694932] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 
00:13:15.083 [2024-11-19 14:14:12.702900] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:15.083 [2024-11-19 14:14:12.703441] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:15.084 [2024-11-19 14:14:12.719903] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:15.084 14:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.084 14:14:12 -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:15.084 14:14:12 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.084 14:14:12 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:15.084 14:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.084 14:14:12 -- common/autotest_common.sh@10 -- # set +x 00:13:15.084 14:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.084 14:14:12 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:15.084 14:14:12 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:15.084 14:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.084 14:14:12 -- common/autotest_common.sh@10 -- # set +x 00:13:15.084 [2024-11-19 14:14:12.903014] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:15.084 [2024-11-19 14:14:12.903354] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:15.084 [2024-11-19 14:14:12.903367] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:15.084 [2024-11-19 14:14:12.903375] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:15.084 [2024-11-19 14:14:12.910906] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:15.084 [2024-11-19 14:14:12.910925] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:15.084 [2024-11-19 14:14:12.918900] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:15.084 [2024-11-19 14:14:12.919434] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:15.084 [2024-11-19 14:14:12.927929] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:15.084 14:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.084 14:14:12 -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:15.084 14:14:12 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.084 14:14:12 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:15.084 14:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.084 14:14:12 -- common/autotest_common.sh@10 -- # set +x 00:13:15.084 14:14:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.084 14:14:13 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:15.084 14:14:13 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:15.084 14:14:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.084 14:14:13 -- common/autotest_common.sh@10 -- # set +x 00:13:15.084 [2024-11-19 14:14:13.110001] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:15.084 [2024-11-19 14:14:13.110329] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:15.084 [2024-11-19 14:14:13.110342] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:15.084 [2024-11-19 14:14:13.110348] ublk.c: 
433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:15.084 [2024-11-19 14:14:13.117925] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:15.084 [2024-11-19 14:14:13.117941] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:15.084 [2024-11-19 14:14:13.125909] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:15.084 [2024-11-19 14:14:13.126426] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:15.084 [2024-11-19 14:14:13.129927] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:15.084 14:14:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.084 14:14:13 -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:15.084 14:14:13 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:15.084 14:14:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.084 14:14:13 -- common/autotest_common.sh@10 -- # set +x 00:13:15.084 14:14:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.084 14:14:13 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:15.084 { 00:13:15.084 "ublk_device": "/dev/ublkb0", 00:13:15.084 "id": 0, 00:13:15.084 "queue_depth": 512, 00:13:15.084 "num_queues": 4, 00:13:15.084 "bdev_name": "Malloc0" 00:13:15.084 }, 00:13:15.084 { 00:13:15.084 "ublk_device": "/dev/ublkb1", 00:13:15.084 "id": 1, 00:13:15.084 "queue_depth": 512, 00:13:15.084 "num_queues": 4, 00:13:15.084 "bdev_name": "Malloc1" 00:13:15.084 }, 00:13:15.084 { 00:13:15.084 "ublk_device": "/dev/ublkb2", 00:13:15.084 "id": 2, 00:13:15.084 "queue_depth": 512, 00:13:15.084 "num_queues": 4, 00:13:15.084 "bdev_name": "Malloc2" 00:13:15.084 }, 00:13:15.084 { 00:13:15.084 "ublk_device": "/dev/ublkb3", 00:13:15.084 "id": 3, 00:13:15.084 "queue_depth": 512, 00:13:15.084 "num_queues": 4, 00:13:15.084 "bdev_name": "Malloc3" 00:13:15.084 } 00:13:15.084 ]' 00:13:15.084 14:14:13 -- ublk/ublk.sh@72 -- # seq 0 3 00:13:15.084 14:14:13 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.084 14:14:13 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:15.084 14:14:13 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:15.084 14:14:13 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:15.084 14:14:13 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:15.084 14:14:13 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:15.084 14:14:13 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:15.084 14:14:13 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:15.084 14:14:13 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:15.084 14:14:13 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:15.084 14:14:13 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:15.084 14:14:13 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.084 14:14:13 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:15.084 14:14:13 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:15.084 14:14:13 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:15.084 14:14:13 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:15.084 14:14:13 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:15.084 14:14:13 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:15.084 14:14:13 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:15.084 14:14:13 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:15.084 14:14:13 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:15.084 14:14:13 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 
]] 00:13:15.084 14:14:13 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.084 14:14:13 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:15.084 14:14:13 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:15.084 14:14:13 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:15.084 14:14:13 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:15.084 14:14:13 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:15.084 14:14:13 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:15.084 14:14:13 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:15.084 14:14:13 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:15.084 14:14:13 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:15.084 14:14:13 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:15.084 14:14:13 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.084 14:14:13 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:15.084 14:14:13 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:15.343 14:14:13 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:15.343 14:14:13 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:15.343 14:14:13 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:15.343 14:14:13 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:15.343 14:14:13 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:15.343 14:14:13 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:15.343 14:14:13 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:15.343 14:14:13 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:15.343 14:14:13 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:15.343 14:14:13 -- ublk/ublk.sh@85 -- # seq 0 3 00:13:15.343 14:14:13 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.343 14:14:13 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:15.343 14:14:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.343 14:14:13 -- common/autotest_common.sh@10 -- # set +x 00:13:15.343 [2024-11-19 14:14:13.777970] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:15.343 [2024-11-19 14:14:13.821898] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:15.343 [2024-11-19 14:14:13.822703] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:15.343 [2024-11-19 14:14:13.829911] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:15.343 [2024-11-19 14:14:13.830144] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:15.343 [2024-11-19 14:14:13.830158] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:15.343 14:14:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.343 14:14:13 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.343 14:14:13 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:15.343 14:14:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.343 14:14:13 -- common/autotest_common.sh@10 -- # set +x 00:13:15.343 [2024-11-19 14:14:13.845958] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:15.343 [2024-11-19 14:14:13.877471] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:15.343 [2024-11-19 14:14:13.878503] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:15.343 [2024-11-19 14:14:13.884912] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:15.343 [2024-11-19 14:14:13.885149] ublk.c: 
947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:15.343 [2024-11-19 14:14:13.885162] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:15.343 14:14:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.343 14:14:13 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.343 14:14:13 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:15.343 14:14:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.343 14:14:13 -- common/autotest_common.sh@10 -- # set +x 00:13:15.343 [2024-11-19 14:14:13.898961] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:15.601 [2024-11-19 14:14:13.940928] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:15.601 [2024-11-19 14:14:13.941616] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:15.601 [2024-11-19 14:14:13.944142] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:15.601 [2024-11-19 14:14:13.944379] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:15.601 [2024-11-19 14:14:13.944394] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:15.601 14:14:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.601 14:14:13 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.601 14:14:13 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:15.601 14:14:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.601 14:14:13 -- common/autotest_common.sh@10 -- # set +x 00:13:15.601 [2024-11-19 14:14:13.954960] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:15.601 [2024-11-19 14:14:13.993402] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:15.601 [2024-11-19 14:14:13.994363] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:15.601 [2024-11-19 14:14:14.002905] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:15.601 [2024-11-19 14:14:14.003140] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:15.601 [2024-11-19 14:14:14.003153] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:15.601 14:14:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.601 14:14:14 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:15.860 [2024-11-19 14:14:14.186959] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:15.860 [2024-11-19 14:14:14.190818] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:15.860 [2024-11-19 14:14:14.190845] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:15.860 14:14:14 -- ublk/ublk.sh@93 -- # seq 0 3 00:13:15.860 14:14:14 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.860 14:14:14 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:15.860 14:14:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.860 14:14:14 -- common/autotest_common.sh@10 -- # set +x 00:13:16.119 14:14:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.119 14:14:14 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:16.119 14:14:14 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:16.119 14:14:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.119 14:14:14 -- common/autotest_common.sh@10 -- # set +x 00:13:16.687 14:14:14 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.687 14:14:14 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:16.687 14:14:14 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:16.687 14:14:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.687 14:14:14 -- common/autotest_common.sh@10 -- # set +x 00:13:16.945 14:14:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.945 14:14:15 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:16.945 14:14:15 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:16.945 14:14:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.945 14:14:15 -- common/autotest_common.sh@10 -- # set +x 00:13:17.204 14:14:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.204 14:14:15 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:17.204 14:14:15 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:17.204 14:14:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.204 14:14:15 -- common/autotest_common.sh@10 -- # set +x 00:13:17.204 14:14:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.204 14:14:15 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:17.204 14:14:15 -- lvol/common.sh@26 -- # jq length 00:13:17.204 14:14:15 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:17.204 14:14:15 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:17.204 14:14:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.204 14:14:15 -- common/autotest_common.sh@10 -- # set +x 00:13:17.204 14:14:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.204 14:14:15 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:17.204 14:14:15 -- lvol/common.sh@28 -- # jq length 00:13:17.462 14:14:15 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:17.462 00:13:17.462 real 0m3.638s 00:13:17.462 user 0m0.798s 00:13:17.462 sys 0m0.137s 00:13:17.463 14:14:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:17.463 14:14:15 -- common/autotest_common.sh@10 -- # set +x 00:13:17.463 ************************************ 00:13:17.463 END TEST test_create_multi_ublk 00:13:17.463 ************************************ 00:13:17.463 14:14:15 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:17.463 14:14:15 -- ublk/ublk.sh@147 -- # cleanup 00:13:17.463 14:14:15 -- ublk/ublk.sh@130 -- # killprocess 69212 00:13:17.463 14:14:15 -- common/autotest_common.sh@936 -- # '[' -z 69212 ']' 00:13:17.463 14:14:15 -- common/autotest_common.sh@940 -- # kill -0 69212 00:13:17.463 14:14:15 -- common/autotest_common.sh@941 -- # uname 00:13:17.463 14:14:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:17.463 14:14:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69212 00:13:17.463 14:14:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:17.463 killing process with pid 69212 00:13:17.463 14:14:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:17.463 14:14:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69212' 00:13:17.463 14:14:15 -- common/autotest_common.sh@955 -- # kill 69212 00:13:17.463 14:14:15 -- common/autotest_common.sh@960 -- # wait 69212 00:13:18.030 [2024-11-19 14:14:16.392660] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:18.030 [2024-11-19 14:14:16.392713] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:18.599 00:13:18.599 real 0m26.370s 00:13:18.599 user 0m37.613s 00:13:18.599 sys 0m9.891s 00:13:18.599 14:14:17 -- common/autotest_common.sh@1115 -- 
# xtrace_disable 00:13:18.599 ************************************ 00:13:18.599 END TEST ublk 00:13:18.599 14:14:17 -- common/autotest_common.sh@10 -- # set +x 00:13:18.599 ************************************ 00:13:18.599 14:14:17 -- spdk/autotest.sh@247 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:18.599 14:14:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:18.599 14:14:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:18.599 14:14:17 -- common/autotest_common.sh@10 -- # set +x 00:13:18.861 ************************************ 00:13:18.861 START TEST ublk_recovery 00:13:18.861 ************************************ 00:13:18.861 14:14:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:18.861 * Looking for test storage... 00:13:18.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:18.861 14:14:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:18.861 14:14:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:18.861 14:14:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:18.861 14:14:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:18.861 14:14:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:18.861 14:14:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:18.861 14:14:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:18.861 14:14:17 -- scripts/common.sh@335 -- # IFS=.-: 00:13:18.861 14:14:17 -- scripts/common.sh@335 -- # read -ra ver1 00:13:18.861 14:14:17 -- scripts/common.sh@336 -- # IFS=.-: 00:13:18.861 14:14:17 -- scripts/common.sh@336 -- # read -ra ver2 00:13:18.861 14:14:17 -- scripts/common.sh@337 -- # local 'op=<' 00:13:18.861 14:14:17 -- scripts/common.sh@339 -- # ver1_l=2 00:13:18.861 14:14:17 -- scripts/common.sh@340 -- # ver2_l=1 00:13:18.861 14:14:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:18.861 14:14:17 -- scripts/common.sh@343 -- # case "$op" in 00:13:18.861 14:14:17 -- scripts/common.sh@344 -- # : 1 00:13:18.861 14:14:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:18.861 14:14:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:18.861 14:14:17 -- scripts/common.sh@364 -- # decimal 1 00:13:18.861 14:14:17 -- scripts/common.sh@352 -- # local d=1 00:13:18.861 14:14:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:18.861 14:14:17 -- scripts/common.sh@354 -- # echo 1 00:13:18.861 14:14:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:18.861 14:14:17 -- scripts/common.sh@365 -- # decimal 2 00:13:18.861 14:14:17 -- scripts/common.sh@352 -- # local d=2 00:13:18.861 14:14:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:18.861 14:14:17 -- scripts/common.sh@354 -- # echo 2 00:13:18.861 14:14:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:18.861 14:14:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:18.861 14:14:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:18.861 14:14:17 -- scripts/common.sh@367 -- # return 0 00:13:18.861 14:14:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:18.861 14:14:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:18.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.861 --rc genhtml_branch_coverage=1 00:13:18.861 --rc genhtml_function_coverage=1 00:13:18.861 --rc genhtml_legend=1 00:13:18.861 --rc geninfo_all_blocks=1 00:13:18.861 --rc geninfo_unexecuted_blocks=1 00:13:18.861 00:13:18.861 ' 00:13:18.861 14:14:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:18.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.861 --rc genhtml_branch_coverage=1 00:13:18.861 --rc genhtml_function_coverage=1 00:13:18.861 --rc genhtml_legend=1 00:13:18.861 --rc geninfo_all_blocks=1 00:13:18.861 --rc geninfo_unexecuted_blocks=1 00:13:18.861 00:13:18.861 ' 00:13:18.861 14:14:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:18.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.861 --rc genhtml_branch_coverage=1 00:13:18.861 --rc genhtml_function_coverage=1 00:13:18.861 --rc genhtml_legend=1 00:13:18.861 --rc geninfo_all_blocks=1 00:13:18.861 --rc geninfo_unexecuted_blocks=1 00:13:18.861 00:13:18.861 ' 00:13:18.861 14:14:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:18.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.861 --rc genhtml_branch_coverage=1 00:13:18.861 --rc genhtml_function_coverage=1 00:13:18.861 --rc genhtml_legend=1 00:13:18.861 --rc geninfo_all_blocks=1 00:13:18.861 --rc geninfo_unexecuted_blocks=1 00:13:18.861 00:13:18.861 ' 00:13:18.861 14:14:17 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:18.861 14:14:17 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:18.861 14:14:17 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:18.861 14:14:17 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:18.861 14:14:17 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:18.861 14:14:17 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:18.861 14:14:17 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:18.861 14:14:17 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:18.861 14:14:17 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:18.861 14:14:17 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:18.861 14:14:17 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=69618 00:13:18.861 14:14:17 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:18.861 14:14:17 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 69618 00:13:18.861 14:14:17 -- 
common/autotest_common.sh@829 -- # '[' -z 69618 ']' 00:13:18.861 14:14:17 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:18.861 14:14:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.861 14:14:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.862 14:14:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.862 14:14:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.862 14:14:17 -- common/autotest_common.sh@10 -- # set +x 00:13:18.862 [2024-11-19 14:14:17.386044] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:18.862 [2024-11-19 14:14:17.386166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69618 ] 00:13:19.121 [2024-11-19 14:14:17.534866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:19.380 [2024-11-19 14:14:17.720101] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:19.380 [2024-11-19 14:14:17.720477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.380 [2024-11-19 14:14:17.720534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.770 14:14:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:20.770 14:14:18 -- common/autotest_common.sh@862 -- # return 0 00:13:20.770 14:14:18 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:20.770 14:14:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.770 14:14:18 -- common/autotest_common.sh@10 -- # set +x 00:13:20.770 [2024-11-19 14:14:18.889543] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:20.770 14:14:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.770 14:14:18 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:20.770 14:14:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.770 14:14:18 -- common/autotest_common.sh@10 -- # set +x 00:13:20.770 malloc0 00:13:20.770 14:14:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.770 14:14:18 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:20.770 14:14:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.770 14:14:18 -- common/autotest_common.sh@10 -- # set +x 00:13:20.770 [2024-11-19 14:14:18.992006] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:13:20.770 [2024-11-19 14:14:18.992101] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:20.770 [2024-11-19 14:14:18.992107] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:20.770 [2024-11-19 14:14:18.992115] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:20.770 [2024-11-19 14:14:19.000994] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:20.770 [2024-11-19 14:14:19.001015] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:20.770 [2024-11-19 14:14:19.007904] ublk.c: 327:ublk_ctrl_process_cqe: 
*DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:20.770 [2024-11-19 14:14:19.008036] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:20.770 [2024-11-19 14:14:19.029907] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:20.770 1 00:13:20.770 14:14:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.770 14:14:19 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:21.710 14:14:20 -- ublk/ublk_recovery.sh@31 -- # fio_proc=69660 00:13:21.710 14:14:20 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:21.710 14:14:20 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:21.710 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:21.710 fio-3.35 00:13:21.710 Starting 1 process 00:13:26.982 14:14:25 -- ublk/ublk_recovery.sh@36 -- # kill -9 69618 00:13:26.982 14:14:25 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:32.272 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 69618 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:32.272 14:14:30 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=69773 00:13:32.272 14:14:30 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:32.272 14:14:30 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 69773 00:13:32.272 14:14:30 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:32.272 14:14:30 -- common/autotest_common.sh@829 -- # '[' -z 69773 ']' 00:13:32.272 14:14:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.272 14:14:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.272 14:14:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.272 14:14:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.272 14:14:30 -- common/autotest_common.sh@10 -- # set +x 00:13:32.272 [2024-11-19 14:14:30.133586] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
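The kill -9 above takes the SPDK target down in the middle of the fio run, but the kernel side of /dev/ublkb1 persists (the recovery lines below report flags 0xda, i.e. user recovery is enabled), so in-flight I/O is queued rather than failed. The restarted spdk_tgt (pid 69773) then reclaims the existing device instead of starting a new one. A sketch of the recovery sequence the next stage drives, assuming the freshly restarted target:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b malloc0 64 4096  # recreate the 64 MiB backing bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_recover_disk malloc0 1            # GET_DEV_INFO, then START/END_USER_RECOVERY on ublk 1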
00:13:32.272 [2024-11-19 14:14:30.133729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69773 ] 00:13:32.272 [2024-11-19 14:14:30.288218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:32.272 [2024-11-19 14:14:30.459030] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:32.272 [2024-11-19 14:14:30.459561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.272 [2024-11-19 14:14:30.459615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.208 14:14:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.208 14:14:31 -- common/autotest_common.sh@862 -- # return 0 00:13:33.208 14:14:31 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:33.208 14:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.208 14:14:31 -- common/autotest_common.sh@10 -- # set +x 00:13:33.208 [2024-11-19 14:14:31.625609] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:33.208 14:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.208 14:14:31 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:33.208 14:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.208 14:14:31 -- common/autotest_common.sh@10 -- # set +x 00:13:33.208 malloc0 00:13:33.208 14:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.208 14:14:31 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:33.208 14:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.208 14:14:31 -- common/autotest_common.sh@10 -- # set +x 00:13:33.208 [2024-11-19 14:14:31.717006] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:33.208 [2024-11-19 14:14:31.717043] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:33.208 [2024-11-19 14:14:31.717051] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:33.208 [2024-11-19 14:14:31.724932] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:33.208 [2024-11-19 14:14:31.724950] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:13:33.208 [2024-11-19 14:14:31.725019] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:13:33.208 1 00:13:33.208 14:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.208 14:14:31 -- ublk/ublk_recovery.sh@52 -- # wait 69660 00:13:33.208 [2024-11-19 14:14:31.732900] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:13:33.208 [2024-11-19 14:14:31.736716] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:13:33.208 [2024-11-19 14:14:31.741058] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:13:33.208 [2024-11-19 14:14:31.741077] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:29.523 00:14:29.523 fio_test: (groupid=0, jobs=1): err= 0: pid=69669: Tue Nov 19 14:15:20 2024 00:14:29.523 read: IOPS=25.1k, BW=98.0MiB/s (103MB/s)(5881MiB/60002msec) 00:14:29.523 slat (nsec): min=1106, max=351416, avg=5495.54, 
stdev=1896.67 00:14:29.523 clat (usec): min=972, max=6704.9k, avg=2510.17, stdev=44034.04 00:14:29.523 lat (usec): min=978, max=6704.9k, avg=2515.66, stdev=44034.06 00:14:29.523 clat percentiles (usec): 00:14:29.523 | 1.00th=[ 1795], 5.00th=[ 1926], 10.00th=[ 1991], 20.00th=[ 2040], 00:14:29.523 | 30.00th=[ 2073], 40.00th=[ 2089], 50.00th=[ 2114], 60.00th=[ 2147], 00:14:29.523 | 70.00th=[ 2147], 80.00th=[ 2180], 90.00th=[ 2245], 95.00th=[ 3195], 00:14:29.523 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 6456], 99.95th=[ 7898], 00:14:29.523 | 99.99th=[13173] 00:14:29.523 bw ( KiB/s): min= 1720, max=121416, per=100.00%, avg=111592.37, stdev=14226.27, samples=107 00:14:29.523 iops : min= 430, max=30354, avg=27898.09, stdev=3556.57, samples=107 00:14:29.523 write: IOPS=25.1k, BW=97.9MiB/s (103MB/s)(5872MiB/60002msec); 0 zone resets 00:14:29.523 slat (nsec): min=1391, max=2941.1k, avg=5712.53, stdev=3096.87 00:14:29.523 clat (usec): min=682, max=6704.6k, avg=2582.86, stdev=43380.62 00:14:29.523 lat (usec): min=695, max=6704.6k, avg=2588.57, stdev=43380.64 00:14:29.523 clat percentiles (usec): 00:14:29.523 | 1.00th=[ 1795], 5.00th=[ 1991], 10.00th=[ 2057], 20.00th=[ 2147], 00:14:29.523 | 30.00th=[ 2180], 40.00th=[ 2212], 50.00th=[ 2212], 60.00th=[ 2245], 00:14:29.523 | 70.00th=[ 2245], 80.00th=[ 2278], 90.00th=[ 2343], 95.00th=[ 3163], 00:14:29.523 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 6587], 99.95th=[ 8094], 00:14:29.523 | 99.99th=[13173] 00:14:29.523 bw ( KiB/s): min= 1608, max=119928, per=100.00%, avg=111455.85, stdev=14126.11, samples=107 00:14:29.523 iops : min= 402, max=29982, avg=27863.96, stdev=3531.53, samples=107 00:14:29.523 lat (usec) : 750=0.01%, 1000=0.01% 00:14:29.523 lat (msec) : 2=8.54%, 4=88.98%, 10=2.44%, 20=0.03%, >=2000=0.01% 00:14:29.523 cpu : usr=5.52%, sys=28.76%, ctx=99124, majf=0, minf=14 00:14:29.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:29.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:29.523 issued rwts: total=1505434,1503358,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:29.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:29.523 00:14:29.523 Run status group 0 (all jobs): 00:14:29.523 READ: bw=98.0MiB/s (103MB/s), 98.0MiB/s-98.0MiB/s (103MB/s-103MB/s), io=5881MiB (6166MB), run=60002-60002msec 00:14:29.523 WRITE: bw=97.9MiB/s (103MB/s), 97.9MiB/s-97.9MiB/s (103MB/s-103MB/s), io=5872MiB (6158MB), run=60002-60002msec 00:14:29.523 00:14:29.523 Disk stats (read/write): 00:14:29.523 ublkb1: ios=1502395/1500360, merge=0/0, ticks=3672922/3652352, in_queue=7325274, util=99.92% 00:14:29.523 14:15:20 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:29.523 14:15:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.523 14:15:20 -- common/autotest_common.sh@10 -- # set +x 00:14:29.523 [2024-11-19 14:15:20.297741] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:29.523 [2024-11-19 14:15:20.336913] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:29.523 [2024-11-19 14:15:20.337084] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:29.523 [2024-11-19 14:15:20.344908] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:29.523 [2024-11-19 14:15:20.345018] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 
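A consistency check on the fio summary above: 1,505,434 reads of 4,096 B over 60.002 s is 25,090 IOPS (the reported 25.1k), and 25,090 x 4,096 B/s is about 98.0 MiB/s, matching the READ line; the 1,503,358 writes land at the same ~97.9 MiB/s. The bw min of ~1.7 MiB/s marks the sample window in which the target was killed and recovered, while the 99.92% utilization over the full 60 s run is consistent with /dev/ublkb1 staying registered throughout.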
00:14:29.523 [2024-11-19 14:15:20.345026] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:29.523 14:15:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.523 14:15:20 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:29.523 14:15:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.523 14:15:20 -- common/autotest_common.sh@10 -- # set +x 00:14:29.523 [2024-11-19 14:15:20.360964] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:29.523 [2024-11-19 14:15:20.368898] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:29.523 [2024-11-19 14:15:20.368933] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:29.523 14:15:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.523 14:15:20 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:29.523 14:15:20 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:29.523 14:15:20 -- ublk/ublk_recovery.sh@14 -- # killprocess 69773 00:14:29.523 14:15:20 -- common/autotest_common.sh@936 -- # '[' -z 69773 ']' 00:14:29.523 14:15:20 -- common/autotest_common.sh@940 -- # kill -0 69773 00:14:29.523 14:15:20 -- common/autotest_common.sh@941 -- # uname 00:14:29.523 14:15:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:29.523 14:15:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69773 00:14:29.523 14:15:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:29.523 14:15:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:29.523 killing process with pid 69773 00:14:29.523 14:15:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69773' 00:14:29.523 14:15:20 -- common/autotest_common.sh@955 -- # kill 69773 00:14:29.523 14:15:20 -- common/autotest_common.sh@960 -- # wait 69773 00:14:29.523 [2024-11-19 14:15:21.461044] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:29.523 [2024-11-19 14:15:21.461093] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:29.523 00:14:29.523 real 1m5.064s 00:14:29.523 user 1m42.691s 00:14:29.523 sys 0m37.241s 00:14:29.523 14:15:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:29.523 ************************************ 00:14:29.523 END TEST ublk_recovery 00:14:29.523 ************************************ 00:14:29.523 14:15:22 -- common/autotest_common.sh@10 -- # set +x 00:14:29.523 14:15:22 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:14:29.523 14:15:22 -- spdk/autotest.sh@255 -- # timing_exit lib 00:14:29.523 14:15:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:29.523 14:15:22 -- common/autotest_common.sh@10 -- # set +x 00:14:29.523 14:15:22 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:14:29.523 14:15:22 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:14:29.523 14:15:22 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:14:29.523 14:15:22 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:14:29.523 14:15:22 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:14:29.523 14:15:22 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:14:29.523 14:15:22 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:29.523 14:15:22 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:14:29.523 14:15:22 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:14:29.523 14:15:22 -- spdk/autotest.sh@329 -- # '[' 1 -eq 1 ']' 00:14:29.523 14:15:22 -- spdk/autotest.sh@330 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:29.523 14:15:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:29.523 14:15:22 -- common/autotest_common.sh@1093 
-- # xtrace_disable 00:14:29.523 14:15:22 -- common/autotest_common.sh@10 -- # set +x 00:14:29.524 ************************************ 00:14:29.524 START TEST ftl 00:14:29.524 ************************************ 00:14:29.524 14:15:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:29.524 * Looking for test storage... 00:14:29.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:29.524 14:15:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:29.524 14:15:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:29.524 14:15:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:29.524 14:15:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:29.524 14:15:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:29.524 14:15:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:29.524 14:15:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:29.524 14:15:22 -- scripts/common.sh@335 -- # IFS=.-: 00:14:29.524 14:15:22 -- scripts/common.sh@335 -- # read -ra ver1 00:14:29.524 14:15:22 -- scripts/common.sh@336 -- # IFS=.-: 00:14:29.524 14:15:22 -- scripts/common.sh@336 -- # read -ra ver2 00:14:29.524 14:15:22 -- scripts/common.sh@337 -- # local 'op=<' 00:14:29.524 14:15:22 -- scripts/common.sh@339 -- # ver1_l=2 00:14:29.524 14:15:22 -- scripts/common.sh@340 -- # ver2_l=1 00:14:29.524 14:15:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:29.524 14:15:22 -- scripts/common.sh@343 -- # case "$op" in 00:14:29.524 14:15:22 -- scripts/common.sh@344 -- # : 1 00:14:29.524 14:15:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:29.524 14:15:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:29.524 14:15:22 -- scripts/common.sh@364 -- # decimal 1 00:14:29.524 14:15:22 -- scripts/common.sh@352 -- # local d=1 00:14:29.524 14:15:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:29.524 14:15:22 -- scripts/common.sh@354 -- # echo 1 00:14:29.524 14:15:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:29.524 14:15:22 -- scripts/common.sh@365 -- # decimal 2 00:14:29.524 14:15:22 -- scripts/common.sh@352 -- # local d=2 00:14:29.524 14:15:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:29.524 14:15:22 -- scripts/common.sh@354 -- # echo 2 00:14:29.524 14:15:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:29.524 14:15:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:29.524 14:15:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:29.524 14:15:22 -- scripts/common.sh@367 -- # return 0 00:14:29.524 14:15:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:29.524 14:15:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:29.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.524 --rc genhtml_branch_coverage=1 00:14:29.524 --rc genhtml_function_coverage=1 00:14:29.524 --rc genhtml_legend=1 00:14:29.524 --rc geninfo_all_blocks=1 00:14:29.524 --rc geninfo_unexecuted_blocks=1 00:14:29.524 00:14:29.524 ' 00:14:29.524 14:15:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:29.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.524 --rc genhtml_branch_coverage=1 00:14:29.524 --rc genhtml_function_coverage=1 00:14:29.524 --rc genhtml_legend=1 00:14:29.524 --rc geninfo_all_blocks=1 00:14:29.524 --rc geninfo_unexecuted_blocks=1 00:14:29.524 00:14:29.524 ' 00:14:29.524 14:15:22 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:29.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.524 --rc genhtml_branch_coverage=1 00:14:29.524 --rc genhtml_function_coverage=1 00:14:29.524 --rc genhtml_legend=1 00:14:29.524 --rc geninfo_all_blocks=1 00:14:29.524 --rc geninfo_unexecuted_blocks=1 00:14:29.524 00:14:29.524 ' 00:14:29.524 14:15:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:29.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.524 --rc genhtml_branch_coverage=1 00:14:29.524 --rc genhtml_function_coverage=1 00:14:29.524 --rc genhtml_legend=1 00:14:29.524 --rc geninfo_all_blocks=1 00:14:29.524 --rc geninfo_unexecuted_blocks=1 00:14:29.524 00:14:29.524 ' 00:14:29.524 14:15:22 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:29.524 14:15:22 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:29.524 14:15:22 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:29.524 14:15:22 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:29.524 14:15:22 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:29.524 14:15:22 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:29.524 14:15:22 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:29.524 14:15:22 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:29.524 14:15:22 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:29.524 14:15:22 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:29.524 14:15:22 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:29.524 14:15:22 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:29.524 14:15:22 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:29.524 14:15:22 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:29.524 14:15:22 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:29.524 14:15:22 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:29.524 14:15:22 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:29.524 14:15:22 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:29.524 14:15:22 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:29.524 14:15:22 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:29.524 14:15:22 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:29.524 14:15:22 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:29.524 14:15:22 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:29.524 14:15:22 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:29.524 14:15:22 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:29.524 14:15:22 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:29.524 14:15:22 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:29.524 14:15:22 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:29.524 14:15:22 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:29.524 14:15:22 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:29.524 14:15:22 -- 
ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:14:29.524 14:15:22 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:14:29.524 14:15:22 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:29.524 14:15:22 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:29.524 14:15:22 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:29.524 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:29.524 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:29.524 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:29.524 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:29.524 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:29.524 14:15:22 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=70583 00:14:29.524 14:15:22 -- ftl/ftl.sh@38 -- # waitforlisten 70583 00:14:29.524 14:15:22 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:29.524 14:15:22 -- common/autotest_common.sh@829 -- # '[' -z 70583 ']' 00:14:29.524 14:15:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.524 14:15:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.524 14:15:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.524 14:15:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.524 14:15:22 -- common/autotest_common.sh@10 -- # set +x 00:14:29.524 [2024-11-19 14:15:23.060132] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:29.524 [2024-11-19 14:15:23.060240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70583 ] 00:14:29.524 [2024-11-19 14:15:23.209088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.524 [2024-11-19 14:15:23.413610] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:29.524 [2024-11-19 14:15:23.413824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.524 14:15:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.524 14:15:23 -- common/autotest_common.sh@862 -- # return 0 00:14:29.524 14:15:23 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:29.524 14:15:24 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:29.524 14:15:24 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:29.524 14:15:24 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:29.524 14:15:25 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:29.524 14:15:25 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:29.524 14:15:25 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:29.524 14:15:25 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:14:29.524 14:15:25 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:29.524 14:15:25 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:14:29.524 14:15:25 -- ftl/ftl.sh@50 
-- # break 00:14:29.524 14:15:25 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:14:29.524 14:15:25 -- ftl/ftl.sh@59 -- # base_size=1310720 00:14:29.524 14:15:25 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:29.525 14:15:25 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:29.525 14:15:25 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:14:29.525 14:15:25 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:29.525 14:15:25 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:14:29.525 14:15:25 -- ftl/ftl.sh@63 -- # break 00:14:29.525 14:15:25 -- ftl/ftl.sh@66 -- # killprocess 70583 00:14:29.525 14:15:25 -- common/autotest_common.sh@936 -- # '[' -z 70583 ']' 00:14:29.525 14:15:25 -- common/autotest_common.sh@940 -- # kill -0 70583 00:14:29.525 14:15:25 -- common/autotest_common.sh@941 -- # uname 00:14:29.525 14:15:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:29.525 14:15:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70583 00:14:29.525 14:15:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:29.525 14:15:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:29.525 killing process with pid 70583 00:14:29.525 14:15:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70583' 00:14:29.525 14:15:25 -- common/autotest_common.sh@955 -- # kill 70583 00:14:29.525 14:15:25 -- common/autotest_common.sh@960 -- # wait 70583 00:14:29.525 14:15:27 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:14:29.525 14:15:27 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:14:29.525 14:15:27 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:29.525 14:15:27 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:29.525 14:15:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:29.525 14:15:27 -- common/autotest_common.sh@10 -- # set +x 00:14:29.525 ************************************ 00:14:29.525 START TEST ftl_fio_basic 00:14:29.525 ************************************ 00:14:29.525 14:15:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:29.525 * Looking for test storage... 
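Device selection for the FTL run above is driven by jq filters over bdev_get_bdevs rather than hard-coded PCI addresses: the nv-cache disk must be non-zoned, expose 64-byte metadata and hold at least 1310720 blocks, and the base disk is any other qualifying NVMe. Reassembled from the ftl.sh lines in the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
  # -> 0000:00:06.0 becomes the nv-cache; a second filter excluding that address picks 0000:00:07.0 as the base disk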
00:14:29.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:29.525 14:15:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:29.525 14:15:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:29.525 14:15:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:29.525 14:15:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:29.525 14:15:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:29.525 14:15:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:29.525 14:15:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:29.525 14:15:27 -- scripts/common.sh@335 -- # IFS=.-: 00:14:29.525 14:15:27 -- scripts/common.sh@335 -- # read -ra ver1 00:14:29.525 14:15:27 -- scripts/common.sh@336 -- # IFS=.-: 00:14:29.525 14:15:27 -- scripts/common.sh@336 -- # read -ra ver2 00:14:29.525 14:15:27 -- scripts/common.sh@337 -- # local 'op=<' 00:14:29.525 14:15:27 -- scripts/common.sh@339 -- # ver1_l=2 00:14:29.525 14:15:27 -- scripts/common.sh@340 -- # ver2_l=1 00:14:29.525 14:15:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:29.525 14:15:27 -- scripts/common.sh@343 -- # case "$op" in 00:14:29.525 14:15:27 -- scripts/common.sh@344 -- # : 1 00:14:29.525 14:15:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:29.525 14:15:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:29.525 14:15:27 -- scripts/common.sh@364 -- # decimal 1 00:14:29.525 14:15:27 -- scripts/common.sh@352 -- # local d=1 00:14:29.525 14:15:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:29.525 14:15:27 -- scripts/common.sh@354 -- # echo 1 00:14:29.525 14:15:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:29.525 14:15:27 -- scripts/common.sh@365 -- # decimal 2 00:14:29.525 14:15:27 -- scripts/common.sh@352 -- # local d=2 00:14:29.525 14:15:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:29.525 14:15:27 -- scripts/common.sh@354 -- # echo 2 00:14:29.525 14:15:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:29.525 14:15:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:29.525 14:15:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:29.525 14:15:27 -- scripts/common.sh@367 -- # return 0 00:14:29.525 14:15:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:29.525 14:15:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:29.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.525 --rc genhtml_branch_coverage=1 00:14:29.525 --rc genhtml_function_coverage=1 00:14:29.525 --rc genhtml_legend=1 00:14:29.525 --rc geninfo_all_blocks=1 00:14:29.525 --rc geninfo_unexecuted_blocks=1 00:14:29.525 00:14:29.525 ' 00:14:29.525 14:15:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:29.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.525 --rc genhtml_branch_coverage=1 00:14:29.525 --rc genhtml_function_coverage=1 00:14:29.525 --rc genhtml_legend=1 00:14:29.525 --rc geninfo_all_blocks=1 00:14:29.525 --rc geninfo_unexecuted_blocks=1 00:14:29.525 00:14:29.525 ' 00:14:29.525 14:15:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:29.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.525 --rc genhtml_branch_coverage=1 00:14:29.525 --rc genhtml_function_coverage=1 00:14:29.525 --rc genhtml_legend=1 00:14:29.525 --rc geninfo_all_blocks=1 00:14:29.525 --rc geninfo_unexecuted_blocks=1 00:14:29.525 00:14:29.525 ' 00:14:29.525 14:15:27 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:29.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.525 --rc genhtml_branch_coverage=1 00:14:29.525 --rc genhtml_function_coverage=1 00:14:29.525 --rc genhtml_legend=1 00:14:29.525 --rc geninfo_all_blocks=1 00:14:29.525 --rc geninfo_unexecuted_blocks=1 00:14:29.525 00:14:29.525 ' 00:14:29.525 14:15:27 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:29.525 14:15:27 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:29.525 14:15:27 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:29.525 14:15:27 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:29.525 14:15:27 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:29.525 14:15:27 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:29.525 14:15:27 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:29.525 14:15:27 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:29.525 14:15:27 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:29.525 14:15:27 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:29.525 14:15:27 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:29.525 14:15:27 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:29.525 14:15:27 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:29.525 14:15:27 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:29.525 14:15:27 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:29.525 14:15:27 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:29.525 14:15:27 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:29.525 14:15:27 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:29.525 14:15:27 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:29.525 14:15:27 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:29.525 14:15:27 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:29.525 14:15:27 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:29.525 14:15:27 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:29.525 14:15:27 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:29.525 14:15:27 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:29.525 14:15:27 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:29.525 14:15:27 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:29.525 14:15:27 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:29.525 14:15:27 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:29.525 14:15:27 -- ftl/fio.sh@11 -- # declare -A suite 00:14:29.525 14:15:27 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:29.525 14:15:27 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:29.525 14:15:27 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:29.525 14:15:27 -- ftl/fio.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:29.525 14:15:27 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:14:29.525 14:15:27 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:14:29.525 14:15:27 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:29.525 14:15:27 -- ftl/fio.sh@26 -- # uuid= 00:14:29.525 14:15:27 -- ftl/fio.sh@27 -- # timeout=240 00:14:29.525 14:15:27 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:29.525 14:15:27 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:29.525 14:15:27 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:29.525 14:15:27 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:29.525 14:15:27 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:29.526 14:15:27 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:29.526 14:15:27 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:29.526 14:15:27 -- ftl/fio.sh@45 -- # svcpid=70724 00:14:29.526 14:15:27 -- ftl/fio.sh@46 -- # waitforlisten 70724 00:14:29.526 14:15:27 -- common/autotest_common.sh@829 -- # '[' -z 70724 ']' 00:14:29.526 14:15:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.526 14:15:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.526 14:15:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.526 14:15:27 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:29.526 14:15:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.526 14:15:27 -- common/autotest_common.sh@10 -- # set +x 00:14:29.526 [2024-11-19 14:15:27.559862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
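Once spdk_tgt (pid 70724) is up on the 3-core mask (-m 7, per the reactor lines below), fio.sh builds the FTL base device out of the 0000:00:07.0 NVMe. Condensed from the create_base_bdev trace that follows; note the 103424 size argument is in MiB and matches the 26,476,544 4-KiB blocks of the resulting lvol exactly:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c4b8b2a2-3b7c-40db-9c7e-7d7c5af3b77e  # -t = thin-provisioned; uuid of the lvstore it creates, per the trace below

The nv-cache bdev (nvc0) on 0000:00:06.0 is then layered on this lvol by create_nv_cache_bdev.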
00:14:29.526 [2024-11-19 14:15:27.559990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70724 ] 00:14:29.526 [2024-11-19 14:15:27.706690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:29.526 [2024-11-19 14:15:27.877539] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:29.526 [2024-11-19 14:15:27.877949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.526 [2024-11-19 14:15:27.878198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.526 [2024-11-19 14:15:27.878265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.461 14:15:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.461 14:15:29 -- common/autotest_common.sh@862 -- # return 0 00:14:30.461 14:15:29 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:14:30.461 14:15:29 -- ftl/common.sh@54 -- # local name=nvme0 00:14:30.461 14:15:29 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:14:30.461 14:15:29 -- ftl/common.sh@56 -- # local size=103424 00:14:30.461 14:15:29 -- ftl/common.sh@59 -- # local base_bdev 00:14:30.461 14:15:29 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:14:30.727 14:15:29 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:30.728 14:15:29 -- ftl/common.sh@62 -- # local base_size 00:14:30.728 14:15:29 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:30.728 14:15:29 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:14:30.728 14:15:29 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:30.728 14:15:29 -- common/autotest_common.sh@1369 -- # local bs 00:14:30.728 14:15:29 -- common/autotest_common.sh@1370 -- # local nb 00:14:30.728 14:15:29 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:30.989 14:15:29 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:30.989 { 00:14:30.989 "name": "nvme0n1", 00:14:30.989 "aliases": [ 00:14:30.989 "5f98d6a9-2d13-4cde-9dc6-3ad0b1e0972d" 00:14:30.989 ], 00:14:30.989 "product_name": "NVMe disk", 00:14:30.989 "block_size": 4096, 00:14:30.989 "num_blocks": 1310720, 00:14:30.989 "uuid": "5f98d6a9-2d13-4cde-9dc6-3ad0b1e0972d", 00:14:30.989 "assigned_rate_limits": { 00:14:30.989 "rw_ios_per_sec": 0, 00:14:30.989 "rw_mbytes_per_sec": 0, 00:14:30.989 "r_mbytes_per_sec": 0, 00:14:30.989 "w_mbytes_per_sec": 0 00:14:30.989 }, 00:14:30.989 "claimed": false, 00:14:30.989 "zoned": false, 00:14:30.989 "supported_io_types": { 00:14:30.989 "read": true, 00:14:30.989 "write": true, 00:14:30.989 "unmap": true, 00:14:30.989 "write_zeroes": true, 00:14:30.989 "flush": true, 00:14:30.989 "reset": true, 00:14:30.989 "compare": true, 00:14:30.989 "compare_and_write": false, 00:14:30.989 "abort": true, 00:14:30.989 "nvme_admin": true, 00:14:30.989 "nvme_io": true 00:14:30.989 }, 00:14:30.989 "driver_specific": { 00:14:30.989 "nvme": [ 00:14:30.989 { 00:14:30.989 "pci_address": "0000:00:07.0", 00:14:30.989 "trid": { 00:14:30.989 "trtype": "PCIe", 00:14:30.989 "traddr": "0000:00:07.0" 00:14:30.989 }, 00:14:30.989 "ctrlr_data": { 00:14:30.989 "cntlid": 0, 00:14:30.989 "vendor_id": "0x1b36", 00:14:30.989 "model_number": "QEMU NVMe Ctrl", 00:14:30.989 "serial_number": 
"12341", 00:14:30.989 "firmware_revision": "8.0.0", 00:14:30.989 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:30.989 "oacs": { 00:14:30.989 "security": 0, 00:14:30.989 "format": 1, 00:14:30.989 "firmware": 0, 00:14:30.989 "ns_manage": 1 00:14:30.989 }, 00:14:30.989 "multi_ctrlr": false, 00:14:30.989 "ana_reporting": false 00:14:30.989 }, 00:14:30.989 "vs": { 00:14:30.989 "nvme_version": "1.4" 00:14:30.989 }, 00:14:30.989 "ns_data": { 00:14:30.989 "id": 1, 00:14:30.989 "can_share": false 00:14:30.989 } 00:14:30.989 } 00:14:30.989 ], 00:14:30.989 "mp_policy": "active_passive" 00:14:30.989 } 00:14:30.989 } 00:14:30.989 ]' 00:14:30.989 14:15:29 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:30.989 14:15:29 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:30.989 14:15:29 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:30.989 14:15:29 -- common/autotest_common.sh@1373 -- # nb=1310720 00:14:30.989 14:15:29 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:14:30.989 14:15:29 -- common/autotest_common.sh@1377 -- # echo 5120 00:14:30.989 14:15:29 -- ftl/common.sh@63 -- # base_size=5120 00:14:30.989 14:15:29 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:30.989 14:15:29 -- ftl/common.sh@67 -- # clear_lvols 00:14:30.989 14:15:29 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:30.989 14:15:29 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:31.248 14:15:29 -- ftl/common.sh@28 -- # stores= 00:14:31.248 14:15:29 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:31.508 14:15:29 -- ftl/common.sh@68 -- # lvs=c4b8b2a2-3b7c-40db-9c7e-7d7c5af3b77e 00:14:31.508 14:15:29 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c4b8b2a2-3b7c-40db-9c7e-7d7c5af3b77e 00:14:31.767 14:15:30 -- ftl/fio.sh@48 -- # split_bdev=41b9ec99-f21e-4ada-b396-73406753f5c6 00:14:31.767 14:15:30 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 41b9ec99-f21e-4ada-b396-73406753f5c6 00:14:31.767 14:15:30 -- ftl/common.sh@35 -- # local name=nvc0 00:14:31.767 14:15:30 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:14:31.767 14:15:30 -- ftl/common.sh@37 -- # local base_bdev=41b9ec99-f21e-4ada-b396-73406753f5c6 00:14:31.767 14:15:30 -- ftl/common.sh@38 -- # local cache_size= 00:14:31.767 14:15:30 -- ftl/common.sh@41 -- # get_bdev_size 41b9ec99-f21e-4ada-b396-73406753f5c6 00:14:31.767 14:15:30 -- common/autotest_common.sh@1367 -- # local bdev_name=41b9ec99-f21e-4ada-b396-73406753f5c6 00:14:31.767 14:15:30 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:31.767 14:15:30 -- common/autotest_common.sh@1369 -- # local bs 00:14:31.767 14:15:30 -- common/autotest_common.sh@1370 -- # local nb 00:14:31.767 14:15:30 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 41b9ec99-f21e-4ada-b396-73406753f5c6 00:14:31.767 14:15:30 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:31.767 { 00:14:31.767 "name": "41b9ec99-f21e-4ada-b396-73406753f5c6", 00:14:31.767 "aliases": [ 00:14:31.767 "lvs/nvme0n1p0" 00:14:31.767 ], 00:14:31.767 "product_name": "Logical Volume", 00:14:31.767 "block_size": 4096, 00:14:31.767 "num_blocks": 26476544, 00:14:31.767 "uuid": "41b9ec99-f21e-4ada-b396-73406753f5c6", 00:14:31.767 "assigned_rate_limits": { 00:14:31.767 "rw_ios_per_sec": 0, 00:14:31.767 "rw_mbytes_per_sec": 0, 00:14:31.767 "r_mbytes_per_sec": 0, 00:14:31.767 
"w_mbytes_per_sec": 0 00:14:31.767 }, 00:14:31.767 "claimed": false, 00:14:31.767 "zoned": false, 00:14:31.767 "supported_io_types": { 00:14:31.767 "read": true, 00:14:31.767 "write": true, 00:14:31.767 "unmap": true, 00:14:31.767 "write_zeroes": true, 00:14:31.767 "flush": false, 00:14:31.767 "reset": true, 00:14:31.767 "compare": false, 00:14:31.767 "compare_and_write": false, 00:14:31.767 "abort": false, 00:14:31.767 "nvme_admin": false, 00:14:31.767 "nvme_io": false 00:14:31.767 }, 00:14:31.767 "driver_specific": { 00:14:31.767 "lvol": { 00:14:31.767 "lvol_store_uuid": "c4b8b2a2-3b7c-40db-9c7e-7d7c5af3b77e", 00:14:31.767 "base_bdev": "nvme0n1", 00:14:31.767 "thin_provision": true, 00:14:31.767 "snapshot": false, 00:14:31.767 "clone": false, 00:14:31.767 "esnap_clone": false 00:14:31.767 } 00:14:31.767 } 00:14:31.767 } 00:14:31.767 ]' 00:14:31.767 14:15:30 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:31.767 14:15:30 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:31.767 14:15:30 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:32.025 14:15:30 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:32.025 14:15:30 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:32.025 14:15:30 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:32.026 14:15:30 -- ftl/common.sh@41 -- # local base_size=5171 00:14:32.026 14:15:30 -- ftl/common.sh@44 -- # local nvc_bdev 00:14:32.026 14:15:30 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:14:32.026 14:15:30 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:32.026 14:15:30 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:32.026 14:15:30 -- ftl/common.sh@48 -- # get_bdev_size 41b9ec99-f21e-4ada-b396-73406753f5c6 00:14:32.026 14:15:30 -- common/autotest_common.sh@1367 -- # local bdev_name=41b9ec99-f21e-4ada-b396-73406753f5c6 00:14:32.026 14:15:30 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:32.026 14:15:30 -- common/autotest_common.sh@1369 -- # local bs 00:14:32.026 14:15:30 -- common/autotest_common.sh@1370 -- # local nb 00:14:32.026 14:15:30 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 41b9ec99-f21e-4ada-b396-73406753f5c6 00:14:32.285 14:15:30 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:32.285 { 00:14:32.285 "name": "41b9ec99-f21e-4ada-b396-73406753f5c6", 00:14:32.285 "aliases": [ 00:14:32.285 "lvs/nvme0n1p0" 00:14:32.285 ], 00:14:32.285 "product_name": "Logical Volume", 00:14:32.285 "block_size": 4096, 00:14:32.285 "num_blocks": 26476544, 00:14:32.285 "uuid": "41b9ec99-f21e-4ada-b396-73406753f5c6", 00:14:32.285 "assigned_rate_limits": { 00:14:32.285 "rw_ios_per_sec": 0, 00:14:32.285 "rw_mbytes_per_sec": 0, 00:14:32.285 "r_mbytes_per_sec": 0, 00:14:32.285 "w_mbytes_per_sec": 0 00:14:32.285 }, 00:14:32.285 "claimed": false, 00:14:32.285 "zoned": false, 00:14:32.285 "supported_io_types": { 00:14:32.285 "read": true, 00:14:32.285 "write": true, 00:14:32.285 "unmap": true, 00:14:32.285 "write_zeroes": true, 00:14:32.285 "flush": false, 00:14:32.285 "reset": true, 00:14:32.285 "compare": false, 00:14:32.285 "compare_and_write": false, 00:14:32.285 "abort": false, 00:14:32.285 "nvme_admin": false, 00:14:32.285 "nvme_io": false 00:14:32.285 }, 00:14:32.285 "driver_specific": { 00:14:32.285 "lvol": { 00:14:32.285 "lvol_store_uuid": "c4b8b2a2-3b7c-40db-9c7e-7d7c5af3b77e", 00:14:32.285 "base_bdev": "nvme0n1", 00:14:32.285 "thin_provision": true, 
00:14:32.285 "snapshot": false, 00:14:32.285 "clone": false, 00:14:32.285 "esnap_clone": false 00:14:32.285 } 00:14:32.285 } 00:14:32.285 } 00:14:32.285 ]' 00:14:32.285 14:15:30 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:32.285 14:15:30 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:32.285 14:15:30 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:32.285 14:15:30 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:32.285 14:15:30 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:32.285 14:15:30 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:32.285 14:15:30 -- ftl/common.sh@48 -- # cache_size=5171 00:14:32.285 14:15:30 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:32.544 14:15:31 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:14:32.544 14:15:31 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:14:32.544 14:15:31 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:14:32.544 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:14:32.544 14:15:31 -- ftl/fio.sh@56 -- # get_bdev_size 41b9ec99-f21e-4ada-b396-73406753f5c6 00:14:32.544 14:15:31 -- common/autotest_common.sh@1367 -- # local bdev_name=41b9ec99-f21e-4ada-b396-73406753f5c6 00:14:32.544 14:15:31 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:32.544 14:15:31 -- common/autotest_common.sh@1369 -- # local bs 00:14:32.544 14:15:31 -- common/autotest_common.sh@1370 -- # local nb 00:14:32.544 14:15:31 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 41b9ec99-f21e-4ada-b396-73406753f5c6 00:14:32.803 14:15:31 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:32.803 { 00:14:32.803 "name": "41b9ec99-f21e-4ada-b396-73406753f5c6", 00:14:32.803 "aliases": [ 00:14:32.803 "lvs/nvme0n1p0" 00:14:32.803 ], 00:14:32.803 "product_name": "Logical Volume", 00:14:32.803 "block_size": 4096, 00:14:32.803 "num_blocks": 26476544, 00:14:32.803 "uuid": "41b9ec99-f21e-4ada-b396-73406753f5c6", 00:14:32.803 "assigned_rate_limits": { 00:14:32.803 "rw_ios_per_sec": 0, 00:14:32.803 "rw_mbytes_per_sec": 0, 00:14:32.803 "r_mbytes_per_sec": 0, 00:14:32.803 "w_mbytes_per_sec": 0 00:14:32.803 }, 00:14:32.803 "claimed": false, 00:14:32.803 "zoned": false, 00:14:32.803 "supported_io_types": { 00:14:32.803 "read": true, 00:14:32.803 "write": true, 00:14:32.803 "unmap": true, 00:14:32.803 "write_zeroes": true, 00:14:32.803 "flush": false, 00:14:32.803 "reset": true, 00:14:32.803 "compare": false, 00:14:32.803 "compare_and_write": false, 00:14:32.803 "abort": false, 00:14:32.803 "nvme_admin": false, 00:14:32.803 "nvme_io": false 00:14:32.803 }, 00:14:32.803 "driver_specific": { 00:14:32.803 "lvol": { 00:14:32.803 "lvol_store_uuid": "c4b8b2a2-3b7c-40db-9c7e-7d7c5af3b77e", 00:14:32.803 "base_bdev": "nvme0n1", 00:14:32.803 "thin_provision": true, 00:14:32.803 "snapshot": false, 00:14:32.803 "clone": false, 00:14:32.803 "esnap_clone": false 00:14:32.803 } 00:14:32.803 } 00:14:32.803 } 00:14:32.803 ]' 00:14:32.803 14:15:31 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:32.803 14:15:31 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:32.803 14:15:31 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:32.803 14:15:31 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:32.803 14:15:31 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:32.803 14:15:31 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:32.803 
14:15:31 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:14:32.803 14:15:31 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:14:32.803 14:15:31 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 41b9ec99-f21e-4ada-b396-73406753f5c6 -c nvc0n1p0 --l2p_dram_limit 60 00:14:33.063 [2024-11-19 14:15:31.423605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.063 [2024-11-19 14:15:31.423983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:33.063 [2024-11-19 14:15:31.424054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:14:33.063 [2024-11-19 14:15:31.424091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.063 [2024-11-19 14:15:31.424190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.063 [2024-11-19 14:15:31.424226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:33.063 [2024-11-19 14:15:31.424266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:14:33.063 [2024-11-19 14:15:31.424302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.063 [2024-11-19 14:15:31.424355] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:33.063 [2024-11-19 14:15:31.424984] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:33.063 [2024-11-19 14:15:31.425056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.063 [2024-11-19 14:15:31.425090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:33.063 [2024-11-19 14:15:31.425128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:14:33.063 [2024-11-19 14:15:31.425160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.063 [2024-11-19 14:15:31.425523] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a566c162-df9b-4752-81fe-6e97479b16d6 00:14:33.063 [2024-11-19 14:15:31.426928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.063 [2024-11-19 14:15:31.427004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:33.063 [2024-11-19 14:15:31.427046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:14:33.063 [2024-11-19 14:15:31.427084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.063 [2024-11-19 14:15:31.433940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.063 [2024-11-19 14:15:31.434013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:33.063 [2024-11-19 14:15:31.434052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.777 ms 00:14:33.063 [2024-11-19 14:15:31.434086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.064 [2024-11-19 14:15:31.434180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.064 [2024-11-19 14:15:31.434213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:33.064 [2024-11-19 14:15:31.434250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:14:33.064 [2024-11-19 14:15:31.434287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.064 [2024-11-19 14:15:31.434371] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:14:33.064 [2024-11-19 14:15:31.434414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:33.064 [2024-11-19 14:15:31.434450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:14:33.064 [2024-11-19 14:15:31.434495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.064 [2024-11-19 14:15:31.434553] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:33.064 [2024-11-19 14:15:31.437919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.064 [2024-11-19 14:15:31.437941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:33.064 [2024-11-19 14:15:31.437950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.372 ms 00:14:33.064 [2024-11-19 14:15:31.437956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.064 [2024-11-19 14:15:31.437995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.064 [2024-11-19 14:15:31.438002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:33.064 [2024-11-19 14:15:31.438010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:14:33.064 [2024-11-19 14:15:31.438016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.064 [2024-11-19 14:15:31.438043] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:33.064 [2024-11-19 14:15:31.438136] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:14:33.064 [2024-11-19 14:15:31.438154] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:33.064 [2024-11-19 14:15:31.438163] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:14:33.064 [2024-11-19 14:15:31.438173] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:33.064 [2024-11-19 14:15:31.438181] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:33.064 [2024-11-19 14:15:31.438189] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:33.064 [2024-11-19 14:15:31.438195] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:33.064 [2024-11-19 14:15:31.438205] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:14:33.064 [2024-11-19 14:15:31.438211] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:14:33.064 [2024-11-19 14:15:31.438218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.064 [2024-11-19 14:15:31.438224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:33.064 [2024-11-19 14:15:31.438231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:14:33.064 [2024-11-19 14:15:31.438237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.064 [2024-11-19 14:15:31.438294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.064 [2024-11-19 14:15:31.438301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:33.064 [2024-11-19 14:15:31.438309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.036 ms 00:14:33.064 [2024-11-19 14:15:31.438315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.064 [2024-11-19 14:15:31.438414] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:33.064 [2024-11-19 14:15:31.438423] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:33.064 [2024-11-19 14:15:31.438431] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:33.064 [2024-11-19 14:15:31.438437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:33.064 [2024-11-19 14:15:31.438445] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:33.064 [2024-11-19 14:15:31.438450] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:33.064 [2024-11-19 14:15:31.438457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:33.064 [2024-11-19 14:15:31.438462] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:33.064 [2024-11-19 14:15:31.438472] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:33.064 [2024-11-19 14:15:31.438477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:33.064 [2024-11-19 14:15:31.438484] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:33.064 [2024-11-19 14:15:31.438490] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:33.064 [2024-11-19 14:15:31.438499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:33.064 [2024-11-19 14:15:31.438504] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:33.064 [2024-11-19 14:15:31.438510] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:14:33.064 [2024-11-19 14:15:31.438515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:33.064 [2024-11-19 14:15:31.438524] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:33.064 [2024-11-19 14:15:31.438529] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:14:33.064 [2024-11-19 14:15:31.438535] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:33.064 [2024-11-19 14:15:31.438541] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:14:33.064 [2024-11-19 14:15:31.438548] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:14:33.064 [2024-11-19 14:15:31.438552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:14:33.064 [2024-11-19 14:15:31.438559] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:33.064 [2024-11-19 14:15:31.438563] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:33.064 [2024-11-19 14:15:31.438570] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:33.064 [2024-11-19 14:15:31.438575] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:33.064 [2024-11-19 14:15:31.438582] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:14:33.064 [2024-11-19 14:15:31.438586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:33.064 [2024-11-19 14:15:31.438593] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:33.064 [2024-11-19 14:15:31.438598] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:33.064 [2024-11-19 14:15:31.438604] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:33.064 [2024-11-19 14:15:31.438609] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:33.064 [2024-11-19 14:15:31.438617] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:14:33.064 [2024-11-19 14:15:31.438635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:33.064 [2024-11-19 14:15:31.438641] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:33.064 [2024-11-19 14:15:31.438646] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:33.064 [2024-11-19 14:15:31.438653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:33.064 [2024-11-19 14:15:31.438658] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:33.064 [2024-11-19 14:15:31.438665] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:14:33.064 [2024-11-19 14:15:31.438670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:33.064 [2024-11-19 14:15:31.438678] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:33.064 [2024-11-19 14:15:31.438684] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:33.064 [2024-11-19 14:15:31.438691] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:33.064 [2024-11-19 14:15:31.438696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:33.064 [2024-11-19 14:15:31.438704] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:33.064 [2024-11-19 14:15:31.438710] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:33.064 [2024-11-19 14:15:31.438716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:33.064 [2024-11-19 14:15:31.438721] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:33.064 [2024-11-19 14:15:31.438729] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:33.064 [2024-11-19 14:15:31.438735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:33.064 [2024-11-19 14:15:31.438743] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:33.064 [2024-11-19 14:15:31.438751] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:33.064 [2024-11-19 14:15:31.438762] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:33.064 [2024-11-19 14:15:31.438767] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:14:33.064 [2024-11-19 14:15:31.438774] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:14:33.064 [2024-11-19 14:15:31.438780] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:14:33.064 [2024-11-19 14:15:31.438787] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:14:33.064 [2024-11-19 14:15:31.438793] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:14:33.064 
[2024-11-19 14:15:31.438801] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:14:33.064 [2024-11-19 14:15:31.438806] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:14:33.064 [2024-11-19 14:15:31.438813] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:14:33.064 [2024-11-19 14:15:31.438819] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:14:33.064 [2024-11-19 14:15:31.438826] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:14:33.064 [2024-11-19 14:15:31.438831] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:14:33.064 [2024-11-19 14:15:31.438840] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:14:33.065 [2024-11-19 14:15:31.438845] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:33.065 [2024-11-19 14:15:31.438855] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:33.065 [2024-11-19 14:15:31.438864] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:33.065 [2024-11-19 14:15:31.438871] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:33.065 [2024-11-19 14:15:31.438888] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:33.065 [2024-11-19 14:15:31.438895] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:33.065 [2024-11-19 14:15:31.438901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.065 [2024-11-19 14:15:31.438911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:33.065 [2024-11-19 14:15:31.438917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:14:33.065 [2024-11-19 14:15:31.438924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.065 [2024-11-19 14:15:31.452825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.065 [2024-11-19 14:15:31.452855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:33.065 [2024-11-19 14:15:31.452864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.851 ms 00:14:33.065 [2024-11-19 14:15:31.452873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.065 [2024-11-19 14:15:31.452960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.065 [2024-11-19 14:15:31.452972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:33.065 [2024-11-19 14:15:31.452979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:14:33.065 [2024-11-19 14:15:31.452986] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.065 [2024-11-19 14:15:31.481140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.065 [2024-11-19 14:15:31.481165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:33.065 [2024-11-19 14:15:31.481175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.107 ms 00:14:33.065 [2024-11-19 14:15:31.481183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.065 [2024-11-19 14:15:31.481225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.065 [2024-11-19 14:15:31.481232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:33.065 [2024-11-19 14:15:31.481247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:14:33.065 [2024-11-19 14:15:31.481255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.065 [2024-11-19 14:15:31.481659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.065 [2024-11-19 14:15:31.481684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:33.065 [2024-11-19 14:15:31.481692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:14:33.065 [2024-11-19 14:15:31.481700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.065 [2024-11-19 14:15:31.481805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.065 [2024-11-19 14:15:31.481821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:33.065 [2024-11-19 14:15:31.481828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:14:33.065 [2024-11-19 14:15:31.481836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.065 [2024-11-19 14:15:31.513414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.065 [2024-11-19 14:15:31.513444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:33.065 [2024-11-19 14:15:31.513454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.555 ms 00:14:33.065 [2024-11-19 14:15:31.513462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.065 [2024-11-19 14:15:31.523525] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:33.065 [2024-11-19 14:15:31.538897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.065 [2024-11-19 14:15:31.538923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:33.065 [2024-11-19 14:15:31.538934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.340 ms 00:14:33.065 [2024-11-19 14:15:31.538940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.065 [2024-11-19 14:15:31.591936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.065 [2024-11-19 14:15:31.591974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:33.065 [2024-11-19 14:15:31.591988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.962 ms 00:14:33.065 [2024-11-19 14:15:31.591997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.065 [2024-11-19 14:15:31.592046] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
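Two things in this stretch of the trace deserve a note. First, the "First startup needs to scrub nv cache data region" message is why fio.sh sets timeout=240 and passes -t 240 to rpc.py for bdev_ftl_create: the 4 GiB scrub that follows accounts for about 2.7 s of the roughly 3.1 s total startup here, and a default RPC timeout could expire on a larger cache. Second, the earlier stderr line "fio.sh: line 52: [: -eq: unary operator expected" is a genuine shell bug: the variable tested at fio.sh:52 expanded to nothing, so bash saw '[ -eq 1 ]'. The test fails, is treated as false, and the run falls through to the default 'basic' test list, as the surrounding trace shows. A hedged repair sketch (the actual variable name at fio.sh:52 is not visible in this log; nightly_flag is a stand-in):

    # Default the flag when unset/empty and quote it, so the numeric
    # test can never collapse to '[ -eq 1 ]'.
    nightly_flag=${nightly_flag:-0}
    if [ "$nightly_flag" -eq 1 ]; then
        tests=${suite[nightly]}
    fi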
00:14:33.065 [2024-11-19 14:15:31.592059] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:14:36.359 [2024-11-19 14:15:34.319838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.359 [2024-11-19 14:15:34.319912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:36.359 [2024-11-19 14:15:34.319931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2727.781 ms 00:14:36.359 [2024-11-19 14:15:34.319940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.359 [2024-11-19 14:15:34.320146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.359 [2024-11-19 14:15:34.320158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:36.359 [2024-11-19 14:15:34.320169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:14:36.359 [2024-11-19 14:15:34.320177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.359 [2024-11-19 14:15:34.343578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.359 [2024-11-19 14:15:34.343607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:36.359 [2024-11-19 14:15:34.343621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.347 ms 00:14:36.359 [2024-11-19 14:15:34.343629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.360 [2024-11-19 14:15:34.366160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.360 [2024-11-19 14:15:34.366186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:36.360 [2024-11-19 14:15:34.366202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.488 ms 00:14:36.360 [2024-11-19 14:15:34.366210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.360 [2024-11-19 14:15:34.366531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.360 [2024-11-19 14:15:34.366552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:36.360 [2024-11-19 14:15:34.366562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:14:36.360 [2024-11-19 14:15:34.366570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.360 [2024-11-19 14:15:34.435890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.360 [2024-11-19 14:15:34.435925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:36.360 [2024-11-19 14:15:34.435939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.266 ms 00:14:36.360 [2024-11-19 14:15:34.435947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.360 [2024-11-19 14:15:34.460078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.360 [2024-11-19 14:15:34.460108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:36.360 [2024-11-19 14:15:34.460123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.080 ms 00:14:36.360 [2024-11-19 14:15:34.460131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.360 [2024-11-19 14:15:34.464100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.360 [2024-11-19 14:15:34.464128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
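Each management step in this startup (and in the shutdown later) is logged as a quadruple of trace_step records: Action/Rollback, name, duration, status. A small hypothetical helper, not part of the test suite, to tabulate step durations from a console log such as this one (console.log is a stand-in filename):

    # Pair every trace_step "name:" record with the "duration:" record
    # that follows it; print one line per FTL management step.
    awk '/trace_step.*name: /     { sub(/.*name: /, ""); step = $0 }
         /trace_step.*duration: / { sub(/.*duration: /, ""); print step "\t" $0 }' \
        console.log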
00:14:36.360 [2024-11-19 14:15:34.464142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.926 ms 00:14:36.360 [2024-11-19 14:15:34.464151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.360 [2024-11-19 14:15:34.487285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.360 [2024-11-19 14:15:34.487313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:36.360 [2024-11-19 14:15:34.487325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.092 ms 00:14:36.360 [2024-11-19 14:15:34.487333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.360 [2024-11-19 14:15:34.487393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.360 [2024-11-19 14:15:34.487403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:36.360 [2024-11-19 14:15:34.487414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:14:36.360 [2024-11-19 14:15:34.487421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.360 [2024-11-19 14:15:34.487512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.360 [2024-11-19 14:15:34.487521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:36.360 [2024-11-19 14:15:34.487534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:14:36.360 [2024-11-19 14:15:34.487541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.361 [2024-11-19 14:15:34.488569] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3064.510 ms, result 0 00:14:36.361 { 00:14:36.361 "name": "ftl0", 00:14:36.361 "uuid": "a566c162-df9b-4752-81fe-6e97479b16d6" 00:14:36.361 } 00:14:36.361 14:15:34 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:14:36.361 14:15:34 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:14:36.361 14:15:34 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:36.361 14:15:34 -- common/autotest_common.sh@899 -- # local i 00:14:36.361 14:15:34 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:36.361 14:15:34 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:36.361 14:15:34 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:36.361 14:15:34 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:14:36.361 [ 00:14:36.361 { 00:14:36.361 "name": "ftl0", 00:14:36.361 "aliases": [ 00:14:36.361 "a566c162-df9b-4752-81fe-6e97479b16d6" 00:14:36.361 ], 00:14:36.361 "product_name": "FTL disk", 00:14:36.361 "block_size": 4096, 00:14:36.361 "num_blocks": 20971520, 00:14:36.361 "uuid": "a566c162-df9b-4752-81fe-6e97479b16d6", 00:14:36.361 "assigned_rate_limits": { 00:14:36.361 "rw_ios_per_sec": 0, 00:14:36.361 "rw_mbytes_per_sec": 0, 00:14:36.361 "r_mbytes_per_sec": 0, 00:14:36.361 "w_mbytes_per_sec": 0 00:14:36.361 }, 00:14:36.361 "claimed": false, 00:14:36.361 "zoned": false, 00:14:36.361 "supported_io_types": { 00:14:36.361 "read": true, 00:14:36.361 "write": true, 00:14:36.361 "unmap": true, 00:14:36.361 "write_zeroes": true, 00:14:36.361 "flush": true, 00:14:36.361 "reset": false, 00:14:36.361 "compare": false, 00:14:36.361 "compare_and_write": false, 00:14:36.362 "abort": false, 00:14:36.362 "nvme_admin": false, 00:14:36.362 "nvme_io": false 00:14:36.362 }, 
00:14:36.362 "driver_specific": { 00:14:36.362 "ftl": { 00:14:36.362 "base_bdev": "41b9ec99-f21e-4ada-b396-73406753f5c6", 00:14:36.362 "cache": "nvc0n1p0" 00:14:36.362 } 00:14:36.362 } 00:14:36.362 } 00:14:36.362 ] 00:14:36.362 14:15:34 -- common/autotest_common.sh@905 -- # return 0 00:14:36.362 14:15:34 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:14:36.362 14:15:34 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:14:36.625 14:15:35 -- ftl/fio.sh@70 -- # echo ']}' 00:14:36.625 14:15:35 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:14:36.886 [2024-11-19 14:15:35.249272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.886 [2024-11-19 14:15:35.249309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:36.886 [2024-11-19 14:15:35.249320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:36.886 [2024-11-19 14:15:35.249330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.886 [2024-11-19 14:15:35.249366] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:36.886 [2024-11-19 14:15:35.252109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.886 [2024-11-19 14:15:35.252134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:36.886 [2024-11-19 14:15:35.252149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.724 ms 00:14:36.886 [2024-11-19 14:15:35.252157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.886 [2024-11-19 14:15:35.252596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.886 [2024-11-19 14:15:35.252613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:36.886 [2024-11-19 14:15:35.252624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:14:36.886 [2024-11-19 14:15:35.252631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.886 [2024-11-19 14:15:35.256087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.886 [2024-11-19 14:15:35.256105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:36.886 [2024-11-19 14:15:35.256116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.433 ms 00:14:36.886 [2024-11-19 14:15:35.256125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.886 [2024-11-19 14:15:35.262400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.886 [2024-11-19 14:15:35.262423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:14:36.886 [2024-11-19 14:15:35.262433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.245 ms 00:14:36.886 [2024-11-19 14:15:35.262441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.886 [2024-11-19 14:15:35.284979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.886 [2024-11-19 14:15:35.285002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:14:36.886 [2024-11-19 14:15:35.285011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.449 ms 00:14:36.886 [2024-11-19 14:15:35.285017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.886 [2024-11-19 14:15:35.297226] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.886 [2024-11-19 14:15:35.297249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:36.886 [2024-11-19 14:15:35.297271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.169 ms 00:14:36.886 [2024-11-19 14:15:35.297277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.886 [2024-11-19 14:15:35.297420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.886 [2024-11-19 14:15:35.297429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:36.886 [2024-11-19 14:15:35.297439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:14:36.886 [2024-11-19 14:15:35.297445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.886 [2024-11-19 14:15:35.315580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.886 [2024-11-19 14:15:35.315601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:14:36.886 [2024-11-19 14:15:35.315611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.105 ms 00:14:36.886 [2024-11-19 14:15:35.315616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.886 [2024-11-19 14:15:35.333112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.886 [2024-11-19 14:15:35.333133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:14:36.886 [2024-11-19 14:15:35.333141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.455 ms 00:14:36.886 [2024-11-19 14:15:35.333147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.886 [2024-11-19 14:15:35.350204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.886 [2024-11-19 14:15:35.350227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:36.886 [2024-11-19 14:15:35.350236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.018 ms 00:14:36.886 [2024-11-19 14:15:35.350242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.886 [2024-11-19 14:15:35.367466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.886 [2024-11-19 14:15:35.367489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:36.886 [2024-11-19 14:15:35.367498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.146 ms 00:14:36.886 [2024-11-19 14:15:35.367503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.886 [2024-11-19 14:15:35.367536] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:36.886 [2024-11-19 14:15:35.367548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367585] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:36.886 [2024-11-19 14:15:35.367720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 
14:15:35.367757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:14:36.887 [2024-11-19 14:15:35.367935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.367999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:36.887 [2024-11-19 14:15:35.368244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:36.888 [2024-11-19 14:15:35.368256] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:36.888 [2024-11-19 14:15:35.368263] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a566c162-df9b-4752-81fe-6e97479b16d6 00:14:36.888 [2024-11-19 14:15:35.368269] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:36.888 [2024-11-19 14:15:35.368277] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:36.888 [2024-11-19 14:15:35.368282] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:36.888 [2024-11-19 14:15:35.368289] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:14:36.888 [2024-11-19 14:15:35.368294] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:36.888 [2024-11-19 14:15:35.368302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:36.888 [2024-11-19 14:15:35.368308] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:36.888 [2024-11-19 14:15:35.368314] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:36.888 [2024-11-19 14:15:35.368319] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:36.888 [2024-11-19 14:15:35.368327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.888 [2024-11-19 14:15:35.368334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:36.888 [2024-11-19 14:15:35.368342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:14:36.888 [2024-11-19 14:15:35.368347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.888 [2024-11-19 14:15:35.378572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.888 [2024-11-19 14:15:35.378594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:36.888 [2024-11-19 14:15:35.378604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.194 ms 00:14:36.888 [2024-11-19 14:15:35.378610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.888 [2024-11-19 14:15:35.378776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:36.888 [2024-11-19 14:15:35.378783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:36.888 [2024-11-19 14:15:35.378791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:14:36.888 [2024-11-19 14:15:35.378797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.888 [2024-11-19 14:15:35.415267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:36.888 [2024-11-19 14:15:35.415292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:36.888 [2024-11-19 14:15:35.415302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:36.888 [2024-11-19 14:15:35.415309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.888 [2024-11-19 14:15:35.415372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:36.888 [2024-11-19 14:15:35.415378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:36.888 [2024-11-19 14:15:35.415387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:36.888 [2024-11-19 14:15:35.415393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.888 [2024-11-19 14:15:35.415459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:36.888 [2024-11-19 14:15:35.415467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:36.888 [2024-11-19 14:15:35.415475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:36.888 [2024-11-19 14:15:35.415481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:36.888 [2024-11-19 14:15:35.415505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:36.888 [2024-11-19 14:15:35.415513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
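[annotation] For reference: the "WAF: inf" line above is the write amplification factor, total writes divided by user writes; with 960 total writes and 0 user writes the division is reported as "inf". Each trace_step block prints one name/duration/status triple per FTL management step. A minimal sketch, assuming the log has been saved to a file (the path ftl.log is hypothetical), of totaling those per-step durations with standard grep/awk:

    # sum every "duration: N ms" that trace_step printed during shutdown
    grep -o 'duration: [0-9.]* ms' ftl.log \
        | awk '{ sum += $2 } END { printf "total step time: %.3f ms\n", sum }'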
00:14:36.888 [2024-11-19 14:15:35.415521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:36.888 [2024-11-19 14:15:35.415526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.147 [2024-11-19 14:15:35.484370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.147 [2024-11-19 14:15:35.484405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:37.147 [2024-11-19 14:15:35.484418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.147 [2024-11-19 14:15:35.484425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.147 [2024-11-19 14:15:35.507639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.147 [2024-11-19 14:15:35.507666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:37.147 [2024-11-19 14:15:35.507677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.147 [2024-11-19 14:15:35.507684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.147 [2024-11-19 14:15:35.507746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.147 [2024-11-19 14:15:35.507754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:37.147 [2024-11-19 14:15:35.507762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.147 [2024-11-19 14:15:35.507768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.147 [2024-11-19 14:15:35.507825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.147 [2024-11-19 14:15:35.507832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:37.147 [2024-11-19 14:15:35.507842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.147 [2024-11-19 14:15:35.507847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.147 [2024-11-19 14:15:35.507944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.147 [2024-11-19 14:15:35.507952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:37.147 [2024-11-19 14:15:35.507960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.147 [2024-11-19 14:15:35.507966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.147 [2024-11-19 14:15:35.508010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.147 [2024-11-19 14:15:35.508017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:37.147 [2024-11-19 14:15:35.508024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.147 [2024-11-19 14:15:35.508032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.147 [2024-11-19 14:15:35.508075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.147 [2024-11-19 14:15:35.508082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:37.147 [2024-11-19 14:15:35.508089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.147 [2024-11-19 14:15:35.508095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.147 [2024-11-19 14:15:35.508145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:37.147 [2024-11-19 14:15:35.508152] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:37.147 [2024-11-19 14:15:35.508162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:37.147 [2024-11-19 14:15:35.508168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.147 [2024-11-19 14:15:35.508320] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 259.019 ms, result 0 00:14:37.147 true 00:14:37.147 14:15:35 -- ftl/fio.sh@75 -- # killprocess 70724 00:14:37.147 14:15:35 -- common/autotest_common.sh@936 -- # '[' -z 70724 ']' 00:14:37.147 14:15:35 -- common/autotest_common.sh@940 -- # kill -0 70724 00:14:37.147 14:15:35 -- common/autotest_common.sh@941 -- # uname 00:14:37.147 14:15:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:37.147 14:15:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70724 00:14:37.147 14:15:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:37.147 14:15:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:37.147 killing process with pid 70724 00:14:37.147 14:15:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70724' 00:14:37.147 14:15:35 -- common/autotest_common.sh@955 -- # kill 70724 00:14:37.147 14:15:35 -- common/autotest_common.sh@960 -- # wait 70724 00:14:42.417 14:15:40 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:14:42.417 14:15:40 -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:42.417 14:15:40 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:14:42.417 14:15:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:42.417 14:15:40 -- common/autotest_common.sh@10 -- # set +x 00:14:42.417 14:15:40 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:42.417 14:15:40 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:42.417 14:15:40 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:42.417 14:15:40 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:42.417 14:15:40 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:42.417 14:15:40 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:42.417 14:15:40 -- common/autotest_common.sh@1330 -- # shift 00:14:42.417 14:15:40 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:42.417 14:15:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:42.417 14:15:40 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:42.417 14:15:40 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:42.417 14:15:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:42.417 14:15:40 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:42.417 14:15:40 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:42.417 14:15:40 -- common/autotest_common.sh@1336 -- # break 00:14:42.417 14:15:40 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:42.417 14:15:40 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:42.678 test: (g=0): rw=randwrite, bs=(R) 
68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:14:42.678 fio-3.35 00:14:42.678 Starting 1 thread 00:14:47.968 00:14:47.968 test: (groupid=0, jobs=1): err= 0: pid=70950: Tue Nov 19 14:15:46 2024 00:14:47.968 read: IOPS=984, BW=65.4MiB/s (68.6MB/s)(255MiB/3892msec) 00:14:47.968 slat (nsec): min=3982, max=36966, avg=7287.60, stdev=3580.55 00:14:47.968 clat (usec): min=244, max=1508, avg=456.56, stdev=207.41 00:14:47.968 lat (usec): min=252, max=1528, avg=463.85, stdev=209.72 00:14:47.968 clat percentiles (usec): 00:14:47.968 | 1.00th=[ 262], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 293], 00:14:47.968 | 30.00th=[ 306], 40.00th=[ 318], 50.00th=[ 347], 60.00th=[ 453], 00:14:47.968 | 70.00th=[ 553], 80.00th=[ 578], 90.00th=[ 816], 95.00th=[ 914], 00:14:47.968 | 99.00th=[ 1106], 99.50th=[ 1156], 99.90th=[ 1336], 99.95th=[ 1450], 00:14:47.968 | 99.99th=[ 1516] 00:14:47.968 write: IOPS=991, BW=65.8MiB/s (69.0MB/s)(256MiB/3889msec); 0 zone resets 00:14:47.968 slat (nsec): min=14483, max=67874, avg=21910.75, stdev=5840.62 00:14:47.968 clat (usec): min=262, max=1858, avg=512.48, stdev=242.32 00:14:47.968 lat (usec): min=280, max=1885, avg=534.39, stdev=245.84 00:14:47.968 clat percentiles (usec): 00:14:47.968 | 1.00th=[ 293], 5.00th=[ 310], 10.00th=[ 310], 20.00th=[ 318], 00:14:47.968 | 30.00th=[ 334], 40.00th=[ 351], 50.00th=[ 379], 60.00th=[ 494], 00:14:47.968 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 938], 95.00th=[ 996], 00:14:47.968 | 99.00th=[ 1254], 99.50th=[ 1385], 99.90th=[ 1778], 99.95th=[ 1860], 00:14:47.968 | 99.99th=[ 1860] 00:14:47.968 bw ( KiB/s): min=41208, max=103832, per=94.85%, avg=63954.14, stdev=24671.80, samples=7 00:14:47.968 iops : min= 606, max= 1526, avg=940.29, stdev=362.63, samples=7 00:14:47.968 lat (usec) : 250=0.10%, 500=62.14%, 750=25.84%, 1000=8.40% 00:14:47.968 lat (msec) : 2=3.51% 00:14:47.968 cpu : usr=99.25%, sys=0.05%, ctx=5, majf=0, minf=1318 00:14:47.968 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:47.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.968 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:47.968 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:47.968 00:14:47.968 Run status group 0 (all jobs): 00:14:47.968 READ: bw=65.4MiB/s (68.6MB/s), 65.4MiB/s-65.4MiB/s (68.6MB/s-68.6MB/s), io=255MiB (267MB), run=3892-3892msec 00:14:47.968 WRITE: bw=65.8MiB/s (69.0MB/s), 65.8MiB/s-65.8MiB/s (69.0MB/s-69.0MB/s), io=256MiB (269MB), run=3889-3889msec 00:14:49.358 ----------------------------------------------------- 00:14:49.358 Suppressions used: 00:14:49.358 count bytes template 00:14:49.358 1 5 /usr/src/fio/parse.c 00:14:49.358 1 8 libtcmalloc_minimal.so 00:14:49.358 1 904 libcrypto.so 00:14:49.358 ----------------------------------------------------- 00:14:49.358 00:14:49.358 14:15:47 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:14:49.358 14:15:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:49.358 14:15:47 -- common/autotest_common.sh@10 -- # set +x 00:14:49.358 14:15:47 -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:49.358 14:15:47 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:14:49.358 14:15:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:49.358 14:15:47 -- common/autotest_common.sh@10 -- # set +x 00:14:49.358 14:15:47 -- ftl/fio.sh@80 -- # fio_bdev 
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:49.358 14:15:47 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:49.358 14:15:47 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:49.358 14:15:47 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:49.358 14:15:47 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:49.358 14:15:47 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:49.358 14:15:47 -- common/autotest_common.sh@1330 -- # shift 00:14:49.358 14:15:47 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:49.358 14:15:47 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:49.358 14:15:47 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:49.358 14:15:47 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:49.358 14:15:47 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:49.358 14:15:47 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:49.358 14:15:47 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:49.358 14:15:47 -- common/autotest_common.sh@1336 -- # break 00:14:49.358 14:15:47 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:49.358 14:15:47 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:49.358 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:49.358 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:49.358 fio-3.35 00:14:49.358 Starting 2 threads 00:15:15.929 00:15:15.929 first_half: (groupid=0, jobs=1): err= 0: pid=71054: Tue Nov 19 14:16:10 2024 00:15:15.929 read: IOPS=3012, BW=11.8MiB/s (12.3MB/s)(256MiB/21734msec) 00:15:15.929 slat (nsec): min=2980, max=69210, avg=4644.79, stdev=1112.13 00:15:15.929 clat (msec): min=6, max=309, avg=35.49, stdev=25.77 00:15:15.929 lat (msec): min=6, max=309, avg=35.49, stdev=25.77 00:15:15.929 clat percentiles (msec): 00:15:15.929 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 27], 20.00th=[ 29], 00:15:15.929 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:15:15.929 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 43], 95.00th=[ 77], 00:15:15.929 | 99.00th=[ 167], 99.50th=[ 197], 99.90th=[ 249], 99.95th=[ 279], 00:15:15.929 | 99.99th=[ 305] 00:15:15.929 write: IOPS=3150, BW=12.3MiB/s (12.9MB/s)(256MiB/20803msec); 0 zone resets 00:15:15.929 slat (usec): min=3, max=412, avg= 6.03, stdev= 3.05 00:15:15.929 clat (usec): min=342, max=61367, avg=6971.32, stdev=7452.82 00:15:15.929 lat (usec): min=352, max=61372, avg=6977.35, stdev=7452.95 00:15:15.929 clat percentiles (usec): 00:15:15.929 | 1.00th=[ 717], 5.00th=[ 971], 10.00th=[ 1352], 20.00th=[ 2474], 00:15:15.929 | 30.00th=[ 3425], 40.00th=[ 4228], 50.00th=[ 5211], 60.00th=[ 5800], 00:15:15.929 | 70.00th=[ 6849], 80.00th=[ 8455], 90.00th=[13566], 95.00th=[23462], 00:15:15.929 | 99.00th=[38536], 99.50th=[42206], 99.90th=[58459], 99.95th=[58983], 00:15:15.929 | 99.99th=[60556] 00:15:15.929 bw ( KiB/s): min= 104, max=55696, per=97.50%, avg=23831.27, 
stdev=15733.54, samples=22 00:15:15.929 iops : min= 26, max=13924, avg=5957.82, stdev=3933.38, samples=22 00:15:15.929 lat (usec) : 500=0.03%, 750=0.83%, 1000=1.78% 00:15:15.929 lat (msec) : 2=5.75%, 4=10.32%, 10=23.65%, 20=6.47%, 50=47.21% 00:15:15.929 lat (msec) : 100=2.11%, 250=1.81%, 500=0.05% 00:15:15.929 cpu : usr=99.43%, sys=0.11%, ctx=35, majf=0, minf=5534 00:15:15.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:15.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.929 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:15.929 issued rwts: total=65464,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:15.929 second_half: (groupid=0, jobs=1): err= 0: pid=71055: Tue Nov 19 14:16:10 2024 00:15:15.929 read: IOPS=3037, BW=11.9MiB/s (12.4MB/s)(256MiB/21563msec) 00:15:15.929 slat (nsec): min=2971, max=20193, avg=5149.73, stdev=937.46 00:15:15.929 clat (msec): min=9, max=267, avg=35.90, stdev=22.79 00:15:15.929 lat (msec): min=9, max=267, avg=35.91, stdev=22.79 00:15:15.929 clat percentiles (msec): 00:15:15.929 | 1.00th=[ 26], 5.00th=[ 26], 10.00th=[ 28], 20.00th=[ 29], 00:15:15.929 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 30], 00:15:15.929 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 45], 95.00th=[ 71], 00:15:15.929 | 99.00th=[ 161], 99.50th=[ 180], 99.90th=[ 209], 99.95th=[ 232], 00:15:15.929 | 99.99th=[ 259] 00:15:15.929 write: IOPS=3055, BW=11.9MiB/s (12.5MB/s)(256MiB/21450msec); 0 zone resets 00:15:15.929 slat (usec): min=3, max=2133, avg= 6.07, stdev=10.27 00:15:15.929 clat (usec): min=309, max=50296, avg=6221.90, stdev=4628.03 00:15:15.929 lat (usec): min=319, max=50301, avg=6227.97, stdev=4628.27 00:15:15.929 clat percentiles (usec): 00:15:15.929 | 1.00th=[ 750], 5.00th=[ 1926], 10.00th=[ 2507], 20.00th=[ 3163], 00:15:15.929 | 30.00th=[ 3785], 40.00th=[ 4490], 50.00th=[ 5145], 60.00th=[ 5604], 00:15:15.929 | 70.00th=[ 6652], 80.00th=[ 8586], 90.00th=[11994], 95.00th=[13435], 00:15:15.929 | 99.00th=[28181], 99.50th=[32900], 99.90th=[44827], 99.95th=[46924], 00:15:15.929 | 99.99th=[49546] 00:15:15.929 bw ( KiB/s): min= 2008, max=46200, per=100.00%, avg=26214.40, stdev=14402.94, samples=20 00:15:15.929 iops : min= 502, max=11550, avg=6553.60, stdev=3600.73, samples=20 00:15:15.929 lat (usec) : 500=0.04%, 750=0.47%, 1000=0.47% 00:15:15.929 lat (msec) : 2=1.72%, 4=13.83%, 10=25.43%, 20=7.38%, 50=46.65% 00:15:15.929 lat (msec) : 100=2.44%, 250=1.57%, 500=0.01% 00:15:15.929 cpu : usr=99.46%, sys=0.13%, ctx=29, majf=0, minf=5581 00:15:15.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:15.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.929 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:15.929 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:15.929 00:15:15.929 Run status group 0 (all jobs): 00:15:15.929 READ: bw=23.5MiB/s (24.7MB/s), 11.8MiB/s-11.9MiB/s (12.3MB/s-12.4MB/s), io=512MiB (536MB), run=21563-21734msec 00:15:15.929 WRITE: bw=23.9MiB/s (25.0MB/s), 11.9MiB/s-12.3MiB/s (12.5MB/s-12.9MB/s), io=512MiB (537MB), run=20803-21450msec 00:15:15.929 ----------------------------------------------------- 00:15:15.929 Suppressions used: 00:15:15.929 count bytes template 00:15:15.929 2 10 /usr/src/fio/parse.c 00:15:15.929 2 
192 /usr/src/fio/iolog.c 00:15:15.929 1 8 libtcmalloc_minimal.so 00:15:15.929 1 904 libcrypto.so 00:15:15.929 ----------------------------------------------------- 00:15:15.929 00:15:15.929 14:16:13 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:15.929 14:16:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:15.929 14:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:15.929 14:16:13 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:15.929 14:16:13 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:15.929 14:16:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:15.929 14:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:15.929 14:16:13 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:15.929 14:16:13 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:15.929 14:16:13 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:15.929 14:16:13 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:15.929 14:16:13 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:15.929 14:16:13 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:15.930 14:16:13 -- common/autotest_common.sh@1330 -- # shift 00:15:15.930 14:16:13 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:15.930 14:16:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:15.930 14:16:13 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:15.930 14:16:13 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:15.930 14:16:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:15.930 14:16:13 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:15.930 14:16:13 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:15.930 14:16:13 -- common/autotest_common.sh@1336 -- # break 00:15:15.930 14:16:13 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:15.930 14:16:13 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:15.930 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:15.930 fio-3.35 00:15:15.930 Starting 1 thread 00:15:34.049 00:15:34.050 test: (groupid=0, jobs=1): err= 0: pid=71351: Tue Nov 19 14:16:29 2024 00:15:34.050 read: IOPS=7184, BW=28.1MiB/s (29.4MB/s)(255MiB/9075msec) 00:15:34.050 slat (nsec): min=2992, max=20092, avg=4586.12, stdev=1097.78 00:15:34.050 clat (usec): min=762, max=38684, avg=17806.48, stdev=2659.84 00:15:34.050 lat (usec): min=770, max=38689, avg=17811.06, stdev=2659.81 00:15:34.050 clat percentiles (usec): 00:15:34.050 | 1.00th=[14353], 5.00th=[14746], 10.00th=[15008], 20.00th=[15533], 00:15:34.050 | 30.00th=[16057], 40.00th=[16581], 50.00th=[17171], 60.00th=[17957], 00:15:34.050 | 70.00th=[18744], 80.00th=[19792], 90.00th=[21103], 95.00th=[22676], 00:15:34.050 | 99.00th=[26346], 99.50th=[27919], 99.90th=[30802], 99.95th=[33817], 00:15:34.050 | 99.99th=[38011] 00:15:34.050 write: IOPS=11.1k, BW=43.3MiB/s (45.4MB/s)(256MiB/5907msec); 0 zone resets 00:15:34.050 slat (usec): min=4, max=433, avg= 
6.41, stdev= 4.21 00:15:34.050 clat (usec): min=513, max=67990, avg=11473.65, stdev=13297.11 00:15:34.050 lat (usec): min=520, max=67998, avg=11480.06, stdev=13297.12 00:15:34.050 clat percentiles (usec): 00:15:34.050 | 1.00th=[ 832], 5.00th=[ 1074], 10.00th=[ 1205], 20.00th=[ 1385], 00:15:34.050 | 30.00th=[ 1631], 40.00th=[ 2507], 50.00th=[ 7046], 60.00th=[ 9634], 00:15:34.050 | 70.00th=[12256], 80.00th=[16057], 90.00th=[37487], 95.00th=[40109], 00:15:34.050 | 99.00th=[47973], 99.50th=[50070], 99.90th=[55313], 99.95th=[56886], 00:15:34.050 | 99.99th=[66323] 00:15:34.050 bw ( KiB/s): min=32968, max=59504, per=98.45%, avg=43690.67, stdev=7993.23, samples=12 00:15:34.050 iops : min= 8242, max=14876, avg=10922.67, stdev=1998.31, samples=12 00:15:34.050 lat (usec) : 750=0.22%, 1000=1.43% 00:15:34.050 lat (msec) : 2=16.85%, 4=2.29%, 10=10.13%, 20=51.95%, 50=16.86% 00:15:34.050 lat (msec) : 100=0.27% 00:15:34.050 cpu : usr=99.31%, sys=0.16%, ctx=33, majf=0, minf=5567 00:15:34.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:34.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.050 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:34.050 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:34.050 00:15:34.050 Run status group 0 (all jobs): 00:15:34.050 READ: bw=28.1MiB/s (29.4MB/s), 28.1MiB/s-28.1MiB/s (29.4MB/s-29.4MB/s), io=255MiB (267MB), run=9075-9075msec 00:15:34.050 WRITE: bw=43.3MiB/s (45.4MB/s), 43.3MiB/s-43.3MiB/s (45.4MB/s-45.4MB/s), io=256MiB (268MB), run=5907-5907msec 00:15:34.050 ----------------------------------------------------- 00:15:34.050 Suppressions used: 00:15:34.050 count bytes template 00:15:34.050 1 5 /usr/src/fio/parse.c 00:15:34.050 2 192 /usr/src/fio/iolog.c 00:15:34.050 1 8 libtcmalloc_minimal.so 00:15:34.050 1 904 libcrypto.so 00:15:34.050 ----------------------------------------------------- 00:15:34.050 00:15:34.050 14:16:31 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:34.050 14:16:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:34.050 14:16:31 -- common/autotest_common.sh@10 -- # set +x 00:15:34.050 14:16:31 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:34.050 Remove shared memory files 00:15:34.050 14:16:31 -- ftl/fio.sh@85 -- # remove_shm 00:15:34.050 14:16:31 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:34.050 14:16:31 -- ftl/common.sh@205 -- # rm -f rm -f 00:15:34.050 14:16:31 -- ftl/common.sh@206 -- # rm -f rm -f 00:15:34.050 14:16:31 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56186 /dev/shm/spdk_tgt_trace.pid69618 00:15:34.050 14:16:31 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:34.050 14:16:31 -- ftl/common.sh@209 -- # rm -f rm -f 00:15:34.050 ************************************ 00:15:34.050 END TEST ftl_fio_basic 00:15:34.050 ************************************ 00:15:34.050 00:15:34.050 real 1m3.847s 00:15:34.050 user 2m9.238s 00:15:34.050 sys 0m11.301s 00:15:34.050 14:16:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:34.050 14:16:31 -- common/autotest_common.sh@10 -- # set +x 00:15:34.050 14:16:31 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:34.050 14:16:31 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:34.050 14:16:31 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:15:34.050 14:16:31 -- common/autotest_common.sh@10 -- # set +x 00:15:34.050 ************************************ 00:15:34.050 START TEST ftl_bdevperf 00:15:34.050 ************************************ 00:15:34.050 14:16:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:34.050 * Looking for test storage... 00:15:34.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:34.050 14:16:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:34.050 14:16:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:34.050 14:16:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:34.050 14:16:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:34.050 14:16:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:34.050 14:16:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:34.050 14:16:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:34.050 14:16:31 -- scripts/common.sh@335 -- # IFS=.-: 00:15:34.050 14:16:31 -- scripts/common.sh@335 -- # read -ra ver1 00:15:34.050 14:16:31 -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.050 14:16:31 -- scripts/common.sh@336 -- # read -ra ver2 00:15:34.050 14:16:31 -- scripts/common.sh@337 -- # local 'op=<' 00:15:34.050 14:16:31 -- scripts/common.sh@339 -- # ver1_l=2 00:15:34.050 14:16:31 -- scripts/common.sh@340 -- # ver2_l=1 00:15:34.050 14:16:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:34.050 14:16:31 -- scripts/common.sh@343 -- # case "$op" in 00:15:34.050 14:16:31 -- scripts/common.sh@344 -- # : 1 00:15:34.050 14:16:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:34.050 14:16:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:34.050 14:16:31 -- scripts/common.sh@364 -- # decimal 1 00:15:34.050 14:16:31 -- scripts/common.sh@352 -- # local d=1 00:15:34.050 14:16:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.050 14:16:31 -- scripts/common.sh@354 -- # echo 1 00:15:34.050 14:16:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:34.050 14:16:31 -- scripts/common.sh@365 -- # decimal 2 00:15:34.050 14:16:31 -- scripts/common.sh@352 -- # local d=2 00:15:34.050 14:16:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:34.050 14:16:31 -- scripts/common.sh@354 -- # echo 2 00:15:34.050 14:16:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:34.050 14:16:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:34.050 14:16:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:34.050 14:16:31 -- scripts/common.sh@367 -- # return 0 00:15:34.050 14:16:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:34.050 14:16:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:34.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.050 --rc genhtml_branch_coverage=1 00:15:34.050 --rc genhtml_function_coverage=1 00:15:34.050 --rc genhtml_legend=1 00:15:34.050 --rc geninfo_all_blocks=1 00:15:34.050 --rc geninfo_unexecuted_blocks=1 00:15:34.050 00:15:34.050 ' 00:15:34.050 14:16:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:34.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.050 --rc genhtml_branch_coverage=1 00:15:34.050 --rc genhtml_function_coverage=1 00:15:34.050 --rc genhtml_legend=1 00:15:34.050 --rc geninfo_all_blocks=1 00:15:34.050 --rc geninfo_unexecuted_blocks=1 00:15:34.050 00:15:34.050 ' 00:15:34.050 14:16:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:34.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.050 --rc genhtml_branch_coverage=1 00:15:34.050 --rc genhtml_function_coverage=1 00:15:34.050 --rc genhtml_legend=1 00:15:34.050 --rc geninfo_all_blocks=1 00:15:34.050 --rc geninfo_unexecuted_blocks=1 00:15:34.050 00:15:34.050 ' 00:15:34.050 14:16:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:34.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.050 --rc genhtml_branch_coverage=1 00:15:34.050 --rc genhtml_function_coverage=1 00:15:34.050 --rc genhtml_legend=1 00:15:34.050 --rc geninfo_all_blocks=1 00:15:34.050 --rc geninfo_unexecuted_blocks=1 00:15:34.050 00:15:34.050 ' 00:15:34.050 14:16:31 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:34.050 14:16:31 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:34.050 14:16:31 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:34.050 14:16:31 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:34.050 14:16:31 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
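[annotation] The lt/cmp_versions xtrace above splits both version strings on ".-:" and compares them field by field to decide whether the installed lcov (1.15) predates 2. A hedged reconstruction of that logic; ver_lt is a hypothetical stand-in for the real helpers in scripts/common.sh:

    ver_lt() {   # sketch: exit 0 when version $1 sorts before version $2
        local -a a b
        local i
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            # compare field by field, treating missing fields as 0
            ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0
            ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo "lcov 1.15 < 2: keep the legacy lcov_* rc options"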
00:15:34.050 14:16:31 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:34.050 14:16:31 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:34.050 14:16:31 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:34.050 14:16:31 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:34.050 14:16:31 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:34.050 14:16:31 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:34.050 14:16:31 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:34.050 14:16:31 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:34.050 14:16:31 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:34.050 14:16:31 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:34.050 14:16:31 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:34.050 14:16:31 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:34.050 14:16:31 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:34.051 14:16:31 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:34.051 14:16:31 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:34.051 14:16:31 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:34.051 14:16:31 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:34.051 14:16:31 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:34.051 14:16:31 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:34.051 14:16:31 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:34.051 14:16:31 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:34.051 14:16:31 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:34.051 14:16:31 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:34.051 14:16:31 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:34.051 14:16:31 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:15:34.051 14:16:31 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:15:34.051 14:16:31 -- ftl/bdevperf.sh@13 -- # use_append= 00:15:34.051 14:16:31 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:34.051 14:16:31 -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:34.051 14:16:31 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:15:34.051 14:16:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:34.051 14:16:31 -- common/autotest_common.sh@10 -- # set +x 00:15:34.051 14:16:31 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=71606 00:15:34.051 14:16:31 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:34.051 14:16:31 -- ftl/bdevperf.sh@22 -- # waitforlisten 71606 00:15:34.051 14:16:31 -- common/autotest_common.sh@829 -- # '[' -z 71606 ']' 00:15:34.051 14:16:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
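[annotation] The prologue above exports the ftl/common.sh paths, then starts bdevperf with -z (pause until told to run via RPC) and -T ftl0 (the target bdev), and blocks in waitforlisten until the application's RPC socket answers. A minimal sketch of that launch pattern; the polling loop is a simplified stand-in for the harness's waitforlisten:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    # poll until the app is up and listening on the default /var/tmp/spdk.sock
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done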
00:15:34.051 14:16:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:34.051 14:16:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.051 14:16:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:34.051 14:16:31 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:34.051 14:16:31 -- common/autotest_common.sh@10 -- # set +x 00:15:34.051 [2024-11-19 14:16:31.473483] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:34.051 [2024-11-19 14:16:31.473630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71606 ] 00:15:34.051 [2024-11-19 14:16:31.624867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.051 [2024-11-19 14:16:31.848475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.051 14:16:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.051 14:16:32 -- common/autotest_common.sh@862 -- # return 0 00:15:34.051 14:16:32 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:34.051 14:16:32 -- ftl/common.sh@54 -- # local name=nvme0 00:15:34.051 14:16:32 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:34.051 14:16:32 -- ftl/common.sh@56 -- # local size=103424 00:15:34.051 14:16:32 -- ftl/common.sh@59 -- # local base_bdev 00:15:34.051 14:16:32 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:34.051 14:16:32 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:34.051 14:16:32 -- ftl/common.sh@62 -- # local base_size 00:15:34.051 14:16:32 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:34.051 14:16:32 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:34.051 14:16:32 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:34.051 14:16:32 -- common/autotest_common.sh@1369 -- # local bs 00:15:34.051 14:16:32 -- common/autotest_common.sh@1370 -- # local nb 00:15:34.051 14:16:32 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:34.313 14:16:32 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:34.313 { 00:15:34.313 "name": "nvme0n1", 00:15:34.313 "aliases": [ 00:15:34.313 "9091f62b-de70-4147-9778-efda8b1d38b6" 00:15:34.313 ], 00:15:34.313 "product_name": "NVMe disk", 00:15:34.313 "block_size": 4096, 00:15:34.313 "num_blocks": 1310720, 00:15:34.313 "uuid": "9091f62b-de70-4147-9778-efda8b1d38b6", 00:15:34.313 "assigned_rate_limits": { 00:15:34.313 "rw_ios_per_sec": 0, 00:15:34.313 "rw_mbytes_per_sec": 0, 00:15:34.313 "r_mbytes_per_sec": 0, 00:15:34.313 "w_mbytes_per_sec": 0 00:15:34.313 }, 00:15:34.313 "claimed": true, 00:15:34.313 "claim_type": "read_many_write_one", 00:15:34.313 "zoned": false, 00:15:34.313 "supported_io_types": { 00:15:34.313 "read": true, 00:15:34.313 "write": true, 00:15:34.313 "unmap": true, 00:15:34.313 "write_zeroes": true, 00:15:34.313 "flush": true, 00:15:34.313 "reset": true, 00:15:34.313 "compare": true, 00:15:34.313 "compare_and_write": false, 00:15:34.313 "abort": true, 00:15:34.313 "nvme_admin": true, 00:15:34.313 "nvme_io": true 00:15:34.313 }, 00:15:34.313 "driver_specific": { 00:15:34.313 "nvme": [ 00:15:34.313 { 
00:15:34.313 "pci_address": "0000:00:07.0", 00:15:34.313 "trid": { 00:15:34.313 "trtype": "PCIe", 00:15:34.313 "traddr": "0000:00:07.0" 00:15:34.313 }, 00:15:34.313 "ctrlr_data": { 00:15:34.313 "cntlid": 0, 00:15:34.313 "vendor_id": "0x1b36", 00:15:34.313 "model_number": "QEMU NVMe Ctrl", 00:15:34.313 "serial_number": "12341", 00:15:34.313 "firmware_revision": "8.0.0", 00:15:34.313 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:34.313 "oacs": { 00:15:34.313 "security": 0, 00:15:34.313 "format": 1, 00:15:34.313 "firmware": 0, 00:15:34.313 "ns_manage": 1 00:15:34.313 }, 00:15:34.313 "multi_ctrlr": false, 00:15:34.313 "ana_reporting": false 00:15:34.313 }, 00:15:34.313 "vs": { 00:15:34.313 "nvme_version": "1.4" 00:15:34.313 }, 00:15:34.313 "ns_data": { 00:15:34.313 "id": 1, 00:15:34.313 "can_share": false 00:15:34.313 } 00:15:34.313 } 00:15:34.313 ], 00:15:34.313 "mp_policy": "active_passive" 00:15:34.313 } 00:15:34.313 } 00:15:34.313 ]' 00:15:34.313 14:16:32 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:34.313 14:16:32 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:34.313 14:16:32 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:34.313 14:16:32 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:34.313 14:16:32 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:34.313 14:16:32 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:34.313 14:16:32 -- ftl/common.sh@63 -- # base_size=5120 00:15:34.313 14:16:32 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:34.313 14:16:32 -- ftl/common.sh@67 -- # clear_lvols 00:15:34.574 14:16:32 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:34.574 14:16:32 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:34.574 14:16:33 -- ftl/common.sh@28 -- # stores=c4b8b2a2-3b7c-40db-9c7e-7d7c5af3b77e 00:15:34.574 14:16:33 -- ftl/common.sh@29 -- # for lvs in $stores 00:15:34.574 14:16:33 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c4b8b2a2-3b7c-40db-9c7e-7d7c5af3b77e 00:15:34.834 14:16:33 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:35.095 14:16:33 -- ftl/common.sh@68 -- # lvs=720097cd-1e2e-4936-9815-4978edb5f591 00:15:35.095 14:16:33 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 720097cd-1e2e-4936-9815-4978edb5f591 00:15:35.357 14:16:33 -- ftl/bdevperf.sh@23 -- # split_bdev=74136da6-038b-4d52-98dd-8a2defcd9e76 00:15:35.357 14:16:33 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 74136da6-038b-4d52-98dd-8a2defcd9e76 00:15:35.357 14:16:33 -- ftl/common.sh@35 -- # local name=nvc0 00:15:35.357 14:16:33 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:35.357 14:16:33 -- ftl/common.sh@37 -- # local base_bdev=74136da6-038b-4d52-98dd-8a2defcd9e76 00:15:35.357 14:16:33 -- ftl/common.sh@38 -- # local cache_size= 00:15:35.357 14:16:33 -- ftl/common.sh@41 -- # get_bdev_size 74136da6-038b-4d52-98dd-8a2defcd9e76 00:15:35.357 14:16:33 -- common/autotest_common.sh@1367 -- # local bdev_name=74136da6-038b-4d52-98dd-8a2defcd9e76 00:15:35.357 14:16:33 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:35.357 14:16:33 -- common/autotest_common.sh@1369 -- # local bs 00:15:35.357 14:16:33 -- common/autotest_common.sh@1370 -- # local nb 00:15:35.357 14:16:33 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 
74136da6-038b-4d52-98dd-8a2defcd9e76 00:15:35.357 14:16:33 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:35.357 { 00:15:35.357 "name": "74136da6-038b-4d52-98dd-8a2defcd9e76", 00:15:35.357 "aliases": [ 00:15:35.357 "lvs/nvme0n1p0" 00:15:35.357 ], 00:15:35.357 "product_name": "Logical Volume", 00:15:35.357 "block_size": 4096, 00:15:35.357 "num_blocks": 26476544, 00:15:35.357 "uuid": "74136da6-038b-4d52-98dd-8a2defcd9e76", 00:15:35.357 "assigned_rate_limits": { 00:15:35.357 "rw_ios_per_sec": 0, 00:15:35.357 "rw_mbytes_per_sec": 0, 00:15:35.357 "r_mbytes_per_sec": 0, 00:15:35.357 "w_mbytes_per_sec": 0 00:15:35.357 }, 00:15:35.357 "claimed": false, 00:15:35.357 "zoned": false, 00:15:35.357 "supported_io_types": { 00:15:35.357 "read": true, 00:15:35.357 "write": true, 00:15:35.357 "unmap": true, 00:15:35.357 "write_zeroes": true, 00:15:35.357 "flush": false, 00:15:35.357 "reset": true, 00:15:35.357 "compare": false, 00:15:35.357 "compare_and_write": false, 00:15:35.357 "abort": false, 00:15:35.357 "nvme_admin": false, 00:15:35.357 "nvme_io": false 00:15:35.357 }, 00:15:35.357 "driver_specific": { 00:15:35.357 "lvol": { 00:15:35.357 "lvol_store_uuid": "720097cd-1e2e-4936-9815-4978edb5f591", 00:15:35.357 "base_bdev": "nvme0n1", 00:15:35.357 "thin_provision": true, 00:15:35.357 "snapshot": false, 00:15:35.357 "clone": false, 00:15:35.357 "esnap_clone": false 00:15:35.357 } 00:15:35.357 } 00:15:35.357 } 00:15:35.357 ]' 00:15:35.357 14:16:33 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:35.618 14:16:33 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:35.618 14:16:33 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:35.618 14:16:33 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:35.618 14:16:33 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:35.618 14:16:33 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:35.618 14:16:33 -- ftl/common.sh@41 -- # local base_size=5171 00:15:35.618 14:16:33 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:35.618 14:16:33 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:35.879 14:16:34 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:35.879 14:16:34 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:35.879 14:16:34 -- ftl/common.sh@48 -- # get_bdev_size 74136da6-038b-4d52-98dd-8a2defcd9e76 00:15:35.879 14:16:34 -- common/autotest_common.sh@1367 -- # local bdev_name=74136da6-038b-4d52-98dd-8a2defcd9e76 00:15:35.879 14:16:34 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:35.879 14:16:34 -- common/autotest_common.sh@1369 -- # local bs 00:15:35.879 14:16:34 -- common/autotest_common.sh@1370 -- # local nb 00:15:35.879 14:16:34 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 74136da6-038b-4d52-98dd-8a2defcd9e76 00:15:35.879 14:16:34 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:35.879 { 00:15:35.879 "name": "74136da6-038b-4d52-98dd-8a2defcd9e76", 00:15:35.879 "aliases": [ 00:15:35.879 "lvs/nvme0n1p0" 00:15:35.879 ], 00:15:35.879 "product_name": "Logical Volume", 00:15:35.879 "block_size": 4096, 00:15:35.879 "num_blocks": 26476544, 00:15:35.879 "uuid": "74136da6-038b-4d52-98dd-8a2defcd9e76", 00:15:35.879 "assigned_rate_limits": { 00:15:35.879 "rw_ios_per_sec": 0, 00:15:35.879 "rw_mbytes_per_sec": 0, 00:15:35.879 "r_mbytes_per_sec": 0, 00:15:35.879 "w_mbytes_per_sec": 0 00:15:35.879 }, 00:15:35.879 "claimed": false, 00:15:35.879 "zoned": false, 
00:15:35.879 "supported_io_types": { 00:15:35.879 "read": true, 00:15:35.879 "write": true, 00:15:35.879 "unmap": true, 00:15:35.879 "write_zeroes": true, 00:15:35.879 "flush": false, 00:15:35.879 "reset": true, 00:15:35.879 "compare": false, 00:15:35.879 "compare_and_write": false, 00:15:35.879 "abort": false, 00:15:35.879 "nvme_admin": false, 00:15:35.879 "nvme_io": false 00:15:35.879 }, 00:15:35.879 "driver_specific": { 00:15:35.879 "lvol": { 00:15:35.879 "lvol_store_uuid": "720097cd-1e2e-4936-9815-4978edb5f591", 00:15:35.879 "base_bdev": "nvme0n1", 00:15:35.879 "thin_provision": true, 00:15:35.879 "snapshot": false, 00:15:35.879 "clone": false, 00:15:35.879 "esnap_clone": false 00:15:35.879 } 00:15:35.879 } 00:15:35.879 } 00:15:35.879 ]' 00:15:35.879 14:16:34 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:35.879 14:16:34 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:35.879 14:16:34 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:36.195 14:16:34 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:36.195 14:16:34 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:36.195 14:16:34 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:36.195 14:16:34 -- ftl/common.sh@48 -- # cache_size=5171 00:15:36.195 14:16:34 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:36.195 14:16:34 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:15:36.195 14:16:34 -- ftl/bdevperf.sh@26 -- # get_bdev_size 74136da6-038b-4d52-98dd-8a2defcd9e76 00:15:36.195 14:16:34 -- common/autotest_common.sh@1367 -- # local bdev_name=74136da6-038b-4d52-98dd-8a2defcd9e76 00:15:36.195 14:16:34 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:36.195 14:16:34 -- common/autotest_common.sh@1369 -- # local bs 00:15:36.195 14:16:34 -- common/autotest_common.sh@1370 -- # local nb 00:15:36.195 14:16:34 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 74136da6-038b-4d52-98dd-8a2defcd9e76 00:15:36.463 14:16:34 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:36.463 { 00:15:36.463 "name": "74136da6-038b-4d52-98dd-8a2defcd9e76", 00:15:36.463 "aliases": [ 00:15:36.463 "lvs/nvme0n1p0" 00:15:36.463 ], 00:15:36.463 "product_name": "Logical Volume", 00:15:36.463 "block_size": 4096, 00:15:36.463 "num_blocks": 26476544, 00:15:36.463 "uuid": "74136da6-038b-4d52-98dd-8a2defcd9e76", 00:15:36.463 "assigned_rate_limits": { 00:15:36.463 "rw_ios_per_sec": 0, 00:15:36.463 "rw_mbytes_per_sec": 0, 00:15:36.463 "r_mbytes_per_sec": 0, 00:15:36.463 "w_mbytes_per_sec": 0 00:15:36.463 }, 00:15:36.463 "claimed": false, 00:15:36.463 "zoned": false, 00:15:36.463 "supported_io_types": { 00:15:36.463 "read": true, 00:15:36.463 "write": true, 00:15:36.463 "unmap": true, 00:15:36.463 "write_zeroes": true, 00:15:36.463 "flush": false, 00:15:36.463 "reset": true, 00:15:36.463 "compare": false, 00:15:36.463 "compare_and_write": false, 00:15:36.463 "abort": false, 00:15:36.463 "nvme_admin": false, 00:15:36.463 "nvme_io": false 00:15:36.463 }, 00:15:36.463 "driver_specific": { 00:15:36.463 "lvol": { 00:15:36.463 "lvol_store_uuid": "720097cd-1e2e-4936-9815-4978edb5f591", 00:15:36.463 "base_bdev": "nvme0n1", 00:15:36.463 "thin_provision": true, 00:15:36.463 "snapshot": false, 00:15:36.463 "clone": false, 00:15:36.463 "esnap_clone": false 00:15:36.463 } 00:15:36.463 } 00:15:36.463 } 00:15:36.463 ]' 00:15:36.463 14:16:34 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 
00:15:36.463 14:16:34 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:36.463 14:16:34 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:36.463 14:16:34 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:36.463 14:16:34 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:36.463 14:16:34 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:36.463 14:16:34 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:15:36.463 14:16:34 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 74136da6-038b-4d52-98dd-8a2defcd9e76 -c nvc0n1p0 --l2p_dram_limit 20 00:15:36.726 [2024-11-19 14:16:35.088821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.726 [2024-11-19 14:16:35.088861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:36.726 [2024-11-19 14:16:35.088873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:36.726 [2024-11-19 14:16:35.088894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.726 [2024-11-19 14:16:35.088934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.726 [2024-11-19 14:16:35.088942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:36.726 [2024-11-19 14:16:35.088949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:15:36.726 [2024-11-19 14:16:35.088955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.726 [2024-11-19 14:16:35.088969] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:36.726 [2024-11-19 14:16:35.089540] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:36.726 [2024-11-19 14:16:35.089561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.726 [2024-11-19 14:16:35.089567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:36.726 [2024-11-19 14:16:35.089576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:15:36.726 [2024-11-19 14:16:35.089582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.726 [2024-11-19 14:16:35.089629] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 47a5a93d-f60f-4563-be6b-84a5c4a6c9fb 00:15:36.726 [2024-11-19 14:16:35.090580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.726 [2024-11-19 14:16:35.090609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:36.726 [2024-11-19 14:16:35.090618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:15:36.726 [2024-11-19 14:16:35.090625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.726 [2024-11-19 14:16:35.095456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.726 [2024-11-19 14:16:35.095485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:36.726 [2024-11-19 14:16:35.095493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.804 ms 00:15:36.726 [2024-11-19 14:16:35.095500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.726 [2024-11-19 14:16:35.095564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.726 [2024-11-19 14:16:35.095572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands 00:15:36.726 [2024-11-19 14:16:35.095579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:36.726 [2024-11-19 14:16:35.095588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.726 [2024-11-19 14:16:35.095624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.726 [2024-11-19 14:16:35.095633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:36.726 [2024-11-19 14:16:35.095641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:36.726 [2024-11-19 14:16:35.095648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.726 [2024-11-19 14:16:35.095665] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:36.726 [2024-11-19 14:16:35.098738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.726 [2024-11-19 14:16:35.098763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:36.726 [2024-11-19 14:16:35.098771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.076 ms 00:15:36.726 [2024-11-19 14:16:35.098777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.726 [2024-11-19 14:16:35.098805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.726 [2024-11-19 14:16:35.098811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:36.726 [2024-11-19 14:16:35.098819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:36.726 [2024-11-19 14:16:35.098824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.726 [2024-11-19 14:16:35.098842] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:36.726 [2024-11-19 14:16:35.098943] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:36.726 [2024-11-19 14:16:35.098956] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:36.726 [2024-11-19 14:16:35.098965] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:36.726 [2024-11-19 14:16:35.098974] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:36.726 [2024-11-19 14:16:35.098981] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:36.726 [2024-11-19 14:16:35.098988] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:36.726 [2024-11-19 14:16:35.098994] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:36.726 [2024-11-19 14:16:35.099004] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:36.726 [2024-11-19 14:16:35.099009] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:36.726 [2024-11-19 14:16:35.099017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.726 [2024-11-19 14:16:35.099022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:36.726 [2024-11-19 14:16:35.099030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:15:36.726 [2024-11-19 14:16:35.099036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:15:36.726 [2024-11-19 14:16:35.099083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.726 [2024-11-19 14:16:35.099090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:36.726 [2024-11-19 14:16:35.099097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:15:36.726 [2024-11-19 14:16:35.099102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.726 [2024-11-19 14:16:35.099156] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:36.726 [2024-11-19 14:16:35.099163] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:36.726 [2024-11-19 14:16:35.099171] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:36.726 [2024-11-19 14:16:35.099182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:36.726 [2024-11-19 14:16:35.099189] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:36.726 [2024-11-19 14:16:35.099194] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:36.726 [2024-11-19 14:16:35.099201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:36.726 [2024-11-19 14:16:35.099207] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:36.726 [2024-11-19 14:16:35.099214] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:36.726 [2024-11-19 14:16:35.099219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:36.726 [2024-11-19 14:16:35.099237] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:36.726 [2024-11-19 14:16:35.099243] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:36.726 [2024-11-19 14:16:35.099250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:36.726 [2024-11-19 14:16:35.099255] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:36.726 [2024-11-19 14:16:35.099261] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:15:36.726 [2024-11-19 14:16:35.099266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:36.726 [2024-11-19 14:16:35.099274] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:36.726 [2024-11-19 14:16:35.099279] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:15:36.726 [2024-11-19 14:16:35.099286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:36.726 [2024-11-19 14:16:35.099290] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:36.726 [2024-11-19 14:16:35.099297] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:15:36.726 [2024-11-19 14:16:35.099302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:36.726 [2024-11-19 14:16:35.099309] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:36.726 [2024-11-19 14:16:35.099314] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:36.726 [2024-11-19 14:16:35.099321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:36.726 [2024-11-19 14:16:35.099326] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:36.726 [2024-11-19 14:16:35.099332] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:15:36.726 [2024-11-19 14:16:35.099337] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:36.726 [2024-11-19 14:16:35.099343] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:36.726 [2024-11-19 14:16:35.099348] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:36.726 [2024-11-19 14:16:35.099353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:36.726 [2024-11-19 14:16:35.099358] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:36.726 [2024-11-19 14:16:35.099366] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:15:36.726 [2024-11-19 14:16:35.099370] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:36.726 [2024-11-19 14:16:35.099376] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:36.727 [2024-11-19 14:16:35.099381] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:36.727 [2024-11-19 14:16:35.099389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:36.727 [2024-11-19 14:16:35.099393] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:36.727 [2024-11-19 14:16:35.099399] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:15:36.727 [2024-11-19 14:16:35.099404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:36.727 [2024-11-19 14:16:35.099410] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:36.727 [2024-11-19 14:16:35.099415] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:36.727 [2024-11-19 14:16:35.099423] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:36.727 [2024-11-19 14:16:35.099430] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:36.727 [2024-11-19 14:16:35.099437] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:36.727 [2024-11-19 14:16:35.099442] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:36.727 [2024-11-19 14:16:35.099448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:36.727 [2024-11-19 14:16:35.099453] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:36.727 [2024-11-19 14:16:35.099460] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:36.727 [2024-11-19 14:16:35.099465] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:36.727 [2024-11-19 14:16:35.099473] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:36.727 [2024-11-19 14:16:35.099479] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:36.727 [2024-11-19 14:16:35.099488] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:36.727 [2024-11-19 14:16:35.099494] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:15:36.727 [2024-11-19 14:16:35.099501] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:15:36.727 [2024-11-19 14:16:35.099506] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:15:36.727 [2024-11-19 
14:16:35.099512] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:15:36.727 [2024-11-19 14:16:35.099517] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:15:36.727 [2024-11-19 14:16:35.099524] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:15:36.727 [2024-11-19 14:16:35.099529] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:15:36.727 [2024-11-19 14:16:35.099536] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:15:36.727 [2024-11-19 14:16:35.099541] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:15:36.727 [2024-11-19 14:16:35.099548] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:15:36.727 [2024-11-19 14:16:35.099553] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:15:36.727 [2024-11-19 14:16:35.099562] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:15:36.727 [2024-11-19 14:16:35.099567] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:36.727 [2024-11-19 14:16:35.099574] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:36.727 [2024-11-19 14:16:35.099580] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:36.727 [2024-11-19 14:16:35.099587] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:36.727 [2024-11-19 14:16:35.099592] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:36.727 [2024-11-19 14:16:35.099599] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:36.727 [2024-11-19 14:16:35.099605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.727 [2024-11-19 14:16:35.099611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:36.727 [2024-11-19 14:16:35.099617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:15:36.727 [2024-11-19 14:16:35.099626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.727 [2024-11-19 14:16:35.111725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.727 [2024-11-19 14:16:35.111756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:36.727 [2024-11-19 14:16:35.111764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.064 ms 00:15:36.727 [2024-11-19 14:16:35.111771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.727 [2024-11-19 14:16:35.111836] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.727 [2024-11-19 14:16:35.111846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:36.727 [2024-11-19 14:16:35.111852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:15:36.727 [2024-11-19 14:16:35.111858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.727 [2024-11-19 14:16:35.148405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.727 [2024-11-19 14:16:35.148439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:36.727 [2024-11-19 14:16:35.148449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.494 ms 00:15:36.727 [2024-11-19 14:16:35.148457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.727 [2024-11-19 14:16:35.148483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.727 [2024-11-19 14:16:35.148493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:36.727 [2024-11-19 14:16:35.148500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:36.727 [2024-11-19 14:16:35.148509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.727 [2024-11-19 14:16:35.148839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.727 [2024-11-19 14:16:35.148901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:36.727 [2024-11-19 14:16:35.148909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:15:36.727 [2024-11-19 14:16:35.148917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.727 [2024-11-19 14:16:35.149003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.727 [2024-11-19 14:16:35.149018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:36.727 [2024-11-19 14:16:35.149027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:15:36.727 [2024-11-19 14:16:35.149034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.727 [2024-11-19 14:16:35.160500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.727 [2024-11-19 14:16:35.160527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:36.727 [2024-11-19 14:16:35.160536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.454 ms 00:15:36.727 [2024-11-19 14:16:35.160543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.727 [2024-11-19 14:16:35.169893] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:15:36.727 [2024-11-19 14:16:35.174288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.727 [2024-11-19 14:16:35.174311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:36.727 [2024-11-19 14:16:35.174321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.687 ms 00:15:36.727 [2024-11-19 14:16:35.174328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.727 [2024-11-19 14:16:35.250045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.727 [2024-11-19 14:16:35.250081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:36.727 [2024-11-19 14:16:35.250092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 75.696 ms 00:15:36.727 [2024-11-19 14:16:35.250098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.727 [2024-11-19 14:16:35.250131] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:15:36.727 [2024-11-19 14:16:35.250140] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:40.936 [2024-11-19 14:16:38.840277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.936 [2024-11-19 14:16:38.840324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:40.936 [2024-11-19 14:16:38.840338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3590.129 ms 00:15:40.936 [2024-11-19 14:16:38.840345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.936 [2024-11-19 14:16:38.840496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.936 [2024-11-19 14:16:38.840505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:40.936 [2024-11-19 14:16:38.840513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:15:40.936 [2024-11-19 14:16:38.840519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.936 [2024-11-19 14:16:38.859767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.936 [2024-11-19 14:16:38.859798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:40.936 [2024-11-19 14:16:38.859809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.221 ms 00:15:40.936 [2024-11-19 14:16:38.859818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.936 [2024-11-19 14:16:38.877891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.936 [2024-11-19 14:16:38.877916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:40.936 [2024-11-19 14:16:38.877928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.044 ms 00:15:40.936 [2024-11-19 14:16:38.877934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.936 [2024-11-19 14:16:38.878174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.936 [2024-11-19 14:16:38.878184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:40.936 [2024-11-19 14:16:38.878192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:15:40.936 [2024-11-19 14:16:38.878198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.936 [2024-11-19 14:16:38.930510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.936 [2024-11-19 14:16:38.930536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:40.936 [2024-11-19 14:16:38.930546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.288 ms 00:15:40.936 [2024-11-19 14:16:38.930552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.936 [2024-11-19 14:16:38.950063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.936 [2024-11-19 14:16:38.950088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:40.936 [2024-11-19 14:16:38.950098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.479 ms 00:15:40.936 [2024-11-19 
14:16:38.950104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.936 [2024-11-19 14:16:38.951129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.936 [2024-11-19 14:16:38.951155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:40.936 [2024-11-19 14:16:38.951165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:15:40.936 [2024-11-19 14:16:38.951172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.936 [2024-11-19 14:16:38.970309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.936 [2024-11-19 14:16:38.970335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:40.936 [2024-11-19 14:16:38.970345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.111 ms 00:15:40.936 [2024-11-19 14:16:38.970350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.936 [2024-11-19 14:16:38.970380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.936 [2024-11-19 14:16:38.970387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:40.936 [2024-11-19 14:16:38.970397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:40.936 [2024-11-19 14:16:38.970403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.936 [2024-11-19 14:16:38.970462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.936 [2024-11-19 14:16:38.970470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:40.936 [2024-11-19 14:16:38.970477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:15:40.936 [2024-11-19 14:16:38.970483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.936 [2024-11-19 14:16:38.971206] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3882.042 ms, result 0
00:15:40.936 {
00:15:40.936   "name": "ftl0",
00:15:40.936   "uuid": "47a5a93d-f60f-4563-be6b-84a5c4a6c9fb"
00:15:40.936 }
00:15:40.936 14:16:38 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0
00:15:40.936 14:16:38 -- ftl/bdevperf.sh@29 -- # jq -r .name
00:15:40.936 14:16:38 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0
00:15:40.936 14:16:39 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
[2024-11-19 14:16:39.259363] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
I/O size of 69632 is greater than zero copy threshold (65536).
00:15:40.936 Zero copy mechanism will not be used.
00:15:40.936 Running I/O for 4 seconds...
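This is the first of three perform_tests runs driven through bdevperf's RPC helper; the later two appear below with the same shape. As a hedged sketch (it assumes a bdevperf process already started with '-z -T ftl0', as earlier in this log; nothing here is copied from the harness itself), the full sequence amounts to:

    # Sketch only: assumes bdevperf is already running with '-z -T ftl0' and the
    # ftl0 bdev exists; the flags mirror the three runs recorded in this log.
    BDEVPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    $BDEVPERF_PY perform_tests -q 1   -w randwrite -t 4 -o 69632   # QD1, 68 KiB writes
    $BDEVPERF_PY perform_tests -q 128 -w randwrite -t 4 -o 4096    # QD128, 4 KiB writes
    $BDEVPERF_PY perform_tests -q 128 -w verify    -t 4 -o 4096    # QD128, read-back verify

The 69632-byte size of the first run (68 KiB) sits above the 65536-byte zero-copy threshold reported just above, which is why bdevperf announces that zero copy will not be used.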
00:15:45.148 
00:15:45.149                                            Latency(us)
[2024-11-19T14:16:43.711Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-11-19T14:16:43.711Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
                           ftl0               :       4.00     832.35      55.27       0.00       0.00    1275.06     209.53    1714.02
[2024-11-19T14:16:43.711Z] ===================================================================================================================
[2024-11-19T14:16:43.711Z] Total              :                832.35      55.27       0.00       0.00    1275.06     209.53    1714.02
[2024-11-19 14:16:43.266111] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
0
00:15:45.149 14:16:43 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-11-19 14:16:43.370667] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:15:49.354 
00:15:49.355                                            Latency(us)
[2024-11-19T14:16:47.917Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-11-19T14:16:47.917Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
                           ftl0               :       4.03    5563.75      21.73       0.00       0.00   22911.93     322.95   50412.31
[2024-11-19T14:16:47.917Z] ===================================================================================================================
[2024-11-19T14:16:47.917Z] Total              :               5563.75      21.73       0.00       0.00   22911.93       0.00   50412.31
[2024-11-19 14:16:47.411827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
0
00:15:49.355 14:16:47 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-11-19 14:16:47.522430] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
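The MiB/s column in these tables is just IOPS multiplied by the I/O size; both completed randwrite runs check out, for example with bc:

    # MiB/s = IOPS * io_size / 2^20; reproduces the table values above
    echo 'scale=2; 832.35  * 69632 / 1048576' | bc   # => 55.27  (QD1, 69632-byte run)
    echo 'scale=2; 5563.75 * 4096  / 1048576' | bc   # => 21.73  (QD128, 4096-byte run)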
00:15:53.573 
00:15:53.573                                            Latency(us)
[2024-11-19T14:16:52.135Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-11-19T14:16:52.135Z] Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:53.573                Verification LBA range: start 0x0 length 0x1400000
00:15:53.573                ftl0               :       4.01    9465.90      36.98       0.00       0.00   13490.12     178.81   70173.93
[2024-11-19T14:16:52.135Z] ===================================================================================================================
[2024-11-19T14:16:52.135Z] Total              :               9465.90      36.98       0.00       0.00   13490.12       0.00   70173.93
0
[2024-11-19 14:16:51.545910] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:15:53.573 14:16:51 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-11-19 14:16:51.744990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.573 [2024-11-19 14:16:51.745045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:53.573 [2024-11-19 14:16:51.745062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:53.573 [2024-11-19 14:16:51.745071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.573 [2024-11-19 14:16:51.745098] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:53.573 [2024-11-19 14:16:51.748097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.573 [2024-11-19 14:16:51.748359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:53.573 [2024-11-19 14:16:51.748382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.985 ms 00:15:53.573 [2024-11-19 14:16:51.748397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.573 [2024-11-19 14:16:51.751587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.573 [2024-11-19 14:16:51.751770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:53.573 [2024-11-19 14:16:51.751791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.157 ms 00:15:53.573 [2024-11-19 14:16:51.751802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.573 [2024-11-19 14:16:51.954804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.573 [2024-11-19 14:16:51.955033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:53.573 [2024-11-19 14:16:51.955061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 202.977 ms 00:15:53.573 [2024-11-19 14:16:51.955072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.573 [2024-11-19 14:16:51.961254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.573 [2024-11-19 14:16:51.961302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:53.573 [2024-11-19 14:16:51.961314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.123 ms 00:15:53.573 [2024-11-19 14:16:51.961324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.573 [2024-11-19 14:16:51.988419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.573 [2024-11-19 14:16:51.988471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:15:53.573 [2024-11-19 14:16:51.988484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.016 ms 00:15:53.573 [2024-11-19 14:16:51.988498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.573 [2024-11-19 14:16:52.006331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.573 [2024-11-19 14:16:52.006533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:53.573 [2024-11-19 14:16:52.006555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.786 ms 00:15:53.573 [2024-11-19 14:16:52.006567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.573 [2024-11-19 14:16:52.006723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.573 [2024-11-19 14:16:52.006740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:53.573 [2024-11-19 14:16:52.006750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:15:53.573 [2024-11-19 14:16:52.006760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.573 [2024-11-19 14:16:52.033041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.573 [2024-11-19 14:16:52.033217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:53.573 [2024-11-19 14:16:52.033237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.265 ms 00:15:53.573 [2024-11-19 14:16:52.033247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.573 [2024-11-19 14:16:52.058599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.573 [2024-11-19 14:16:52.058648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:53.573 [2024-11-19 14:16:52.058659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.279 ms 00:15:53.573 [2024-11-19 14:16:52.058672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.573 [2024-11-19 14:16:52.083512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.573 [2024-11-19 14:16:52.083561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:53.573 [2024-11-19 14:16:52.083572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.797 ms 00:15:53.573 [2024-11-19 14:16:52.083582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.573 [2024-11-19 14:16:52.108481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.573 [2024-11-19 14:16:52.108529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:53.573 [2024-11-19 14:16:52.108540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.816 ms 00:15:53.573 [2024-11-19 14:16:52.108550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.573 [2024-11-19 14:16:52.108593] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:53.573 [2024-11-19 14:16:52.108612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108642] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 
14:16:52.108900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.108996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:15:53.573 [2024-11-19 14:16:52.109140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:53.573 [2024-11-19 14:16:52.109635] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:53.573 [2024-11-19 14:16:52.109645] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 47a5a93d-f60f-4563-be6b-84a5c4a6c9fb 00:15:53.574 [2024-11-19 14:16:52.109658] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:53.574 
[2024-11-19 14:16:52.109666] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:53.574 [2024-11-19 14:16:52.109676] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:53.574 [2024-11-19 14:16:52.109684] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:53.574 [2024-11-19 14:16:52.109694] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:53.574 [2024-11-19 14:16:52.109704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:53.574 [2024-11-19 14:16:52.109715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:53.574 [2024-11-19 14:16:52.109722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:53.574 [2024-11-19 14:16:52.109730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:53.574 [2024-11-19 14:16:52.109737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.574 [2024-11-19 14:16:52.109747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:53.574 [2024-11-19 14:16:52.109756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.146 ms 00:15:53.574 [2024-11-19 14:16:52.109765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.574 [2024-11-19 14:16:52.123270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.574 [2024-11-19 14:16:52.123435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:53.574 [2024-11-19 14:16:52.123452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.470 ms 00:15:53.574 [2024-11-19 14:16:52.123468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.574 [2024-11-19 14:16:52.123696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.574 [2024-11-19 14:16:52.123710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:53.574 [2024-11-19 14:16:52.123719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:15:53.574 [2024-11-19 14:16:52.123728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.835 [2024-11-19 14:16:52.165267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.835 [2024-11-19 14:16:52.165316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:53.835 [2024-11-19 14:16:52.165331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.835 [2024-11-19 14:16:52.165342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.835 [2024-11-19 14:16:52.165408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.835 [2024-11-19 14:16:52.165419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:53.835 [2024-11-19 14:16:52.165428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.835 [2024-11-19 14:16:52.165437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.835 [2024-11-19 14:16:52.165508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.835 [2024-11-19 14:16:52.165523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:53.835 [2024-11-19 14:16:52.165531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.835 [2024-11-19 14:16:52.165547] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.835 [2024-11-19 14:16:52.165563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.835 [2024-11-19 14:16:52.165573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:53.835 [2024-11-19 14:16:52.165581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.835 [2024-11-19 14:16:52.165592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.835 [2024-11-19 14:16:52.246567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.835 [2024-11-19 14:16:52.246623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:53.835 [2024-11-19 14:16:52.246635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.835 [2024-11-19 14:16:52.246648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.835 [2024-11-19 14:16:52.279371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.835 [2024-11-19 14:16:52.279423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:53.835 [2024-11-19 14:16:52.279434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.835 [2024-11-19 14:16:52.279445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.835 [2024-11-19 14:16:52.279512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.835 [2024-11-19 14:16:52.279525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:53.835 [2024-11-19 14:16:52.279534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.835 [2024-11-19 14:16:52.279548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.835 [2024-11-19 14:16:52.279594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.835 [2024-11-19 14:16:52.279607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:53.835 [2024-11-19 14:16:52.279616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.835 [2024-11-19 14:16:52.279627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.835 [2024-11-19 14:16:52.279724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.835 [2024-11-19 14:16:52.279737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:53.835 [2024-11-19 14:16:52.279747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.835 [2024-11-19 14:16:52.279758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.835 [2024-11-19 14:16:52.279795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.835 [2024-11-19 14:16:52.279810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:53.835 [2024-11-19 14:16:52.279818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.835 [2024-11-19 14:16:52.279828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.835 [2024-11-19 14:16:52.279869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.835 [2024-11-19 14:16:52.279923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:53.835 [2024-11-19 14:16:52.279931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:15:53.835 [2024-11-19 14:16:52.279944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.835 [2024-11-19 14:16:52.279998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.835 [2024-11-19 14:16:52.280013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:53.835 [2024-11-19 14:16:52.280022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.835 [2024-11-19 14:16:52.280033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.835 [2024-11-19 14:16:52.280180] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 535.143 ms, result 0
00:15:53.835 true
00:15:53.835 14:16:52 -- ftl/bdevperf.sh@37 -- # killprocess 71606
00:15:53.835 14:16:52 -- common/autotest_common.sh@936 -- # '[' -z 71606 ']'
00:15:53.835 14:16:52 -- common/autotest_common.sh@940 -- # kill -0 71606
00:15:53.835 14:16:52 -- common/autotest_common.sh@941 -- # uname
00:15:53.835 14:16:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:53.835 14:16:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71606
killing process with pid 71606
Received shutdown signal, test time was about 4.000000 seconds
00:15:53.835 
00:15:53.835                                            Latency(us)
[2024-11-19T14:16:52.397Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-11-19T14:16:52.397Z] ===================================================================================================================
[2024-11-19T14:16:52.397Z] Total              :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
14:16:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0
14:16:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
14:16:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71606'
14:16:52 -- common/autotest_common.sh@955 -- # kill 71606
14:16:52 -- common/autotest_common.sh@960 -- # wait 71606
14:16:52 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT
14:16:52 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0'
14:16:52 -- common/autotest_common.sh@728 -- # xtrace_disable
14:16:52 -- common/autotest_common.sh@10 -- # set +x
00:15:54.667 Remove shared memory files
14:16:53 -- ftl/bdevperf.sh@41 -- # remove_shm
14:16:53 -- ftl/common.sh@204 -- # echo Remove shared memory files
14:16:53 -- ftl/common.sh@205 -- # rm -f rm -f
14:16:53 -- ftl/common.sh@206 -- # rm -f rm -f
14:16:53 -- ftl/common.sh@207 -- # rm -f rm -f
14:16:53 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
14:16:53 -- ftl/common.sh@209 -- # rm -f rm -f
00:15:54.666 ************************************
00:15:54.666 END TEST ftl_bdevperf
00:15:54.666 ************************************
00:15:54.667 real 0m21.797s
00:15:54.667 user 0m24.192s
00:15:54.667 sys 0m0.996s
00:15:54.667 14:16:53 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:54.667 14:16:53 -- common/autotest_common.sh@10 -- # set +x
00:15:54.667 14:16:53 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0
00:15:54.667 14:16:53 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
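The killprocess sequence traced above (ftl/bdevperf.sh@37) follows a defensive pattern: check the PID argument is non-empty, confirm the process is still alive with kill -0, look up its name before signalling, then kill and wait. A rough reconstruction, inferred from the xtrace rather than copied from autotest_common.sh:

    # Inferred shape of the killprocess helper traced above; details are assumptions.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                   # refuse an empty pid
        kill -0 "$pid" 2>/dev/null || return 0      # nothing to do if already gone
        local process_name=
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1      # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }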
00:15:54.667 14:16:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:54.667 14:16:53 -- common/autotest_common.sh@10 -- # set +x 00:15:54.667 ************************************ 00:15:54.667 START TEST ftl_trim 00:15:54.667 ************************************ 00:15:54.667 14:16:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:15:54.667 * Looking for test storage... 00:15:54.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:54.667 14:16:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:54.667 14:16:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:54.667 14:16:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:54.667 14:16:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:54.667 14:16:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:54.667 14:16:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:54.667 14:16:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:54.667 14:16:53 -- scripts/common.sh@335 -- # IFS=.-: 00:15:54.667 14:16:53 -- scripts/common.sh@335 -- # read -ra ver1 00:15:54.667 14:16:53 -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.667 14:16:53 -- scripts/common.sh@336 -- # read -ra ver2 00:15:54.667 14:16:53 -- scripts/common.sh@337 -- # local 'op=<' 00:15:54.667 14:16:53 -- scripts/common.sh@339 -- # ver1_l=2 00:15:54.667 14:16:53 -- scripts/common.sh@340 -- # ver2_l=1 00:15:54.667 14:16:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:54.667 14:16:53 -- scripts/common.sh@343 -- # case "$op" in 00:15:54.667 14:16:53 -- scripts/common.sh@344 -- # : 1 00:15:54.667 14:16:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:54.667 14:16:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.667 14:16:53 -- scripts/common.sh@364 -- # decimal 1 00:15:54.667 14:16:53 -- scripts/common.sh@352 -- # local d=1 00:15:54.667 14:16:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.667 14:16:53 -- scripts/common.sh@354 -- # echo 1 00:15:54.667 14:16:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:54.667 14:16:53 -- scripts/common.sh@365 -- # decimal 2 00:15:54.667 14:16:53 -- scripts/common.sh@352 -- # local d=2 00:15:54.667 14:16:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.667 14:16:53 -- scripts/common.sh@354 -- # echo 2 00:15:54.667 14:16:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:54.667 14:16:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:54.667 14:16:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:54.667 14:16:53 -- scripts/common.sh@367 -- # return 0 00:15:54.667 14:16:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.667 14:16:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:54.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.667 --rc genhtml_branch_coverage=1 00:15:54.667 --rc genhtml_function_coverage=1 00:15:54.667 --rc genhtml_legend=1 00:15:54.667 --rc geninfo_all_blocks=1 00:15:54.667 --rc geninfo_unexecuted_blocks=1 00:15:54.667 00:15:54.667 ' 00:15:54.667 14:16:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:54.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.667 --rc genhtml_branch_coverage=1 00:15:54.667 --rc genhtml_function_coverage=1 00:15:54.667 --rc genhtml_legend=1 00:15:54.667 --rc geninfo_all_blocks=1 00:15:54.667 --rc geninfo_unexecuted_blocks=1 00:15:54.667 00:15:54.667 ' 00:15:54.667 14:16:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:54.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.667 --rc genhtml_branch_coverage=1 00:15:54.667 --rc genhtml_function_coverage=1 00:15:54.667 --rc genhtml_legend=1 00:15:54.667 --rc geninfo_all_blocks=1 00:15:54.667 --rc geninfo_unexecuted_blocks=1 00:15:54.667 00:15:54.667 ' 00:15:54.667 14:16:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:54.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.667 --rc genhtml_branch_coverage=1 00:15:54.667 --rc genhtml_function_coverage=1 00:15:54.667 --rc genhtml_legend=1 00:15:54.667 --rc geninfo_all_blocks=1 00:15:54.667 --rc geninfo_unexecuted_blocks=1 00:15:54.667 00:15:54.667 ' 00:15:54.667 14:16:53 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:54.667 14:16:53 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:15:54.927 14:16:53 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:54.927 14:16:53 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:54.927 14:16:53 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
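The scripts/common.sh trace above executes the whole version comparison: lt 1.15 2 delegates to cmp_versions, which splits both strings on IFS=.-: into component arrays and walks them left to right. A compact bash sketch mirroring the traced logic (@332-@367); the traced code also validates each component through a decimal() helper (@352-@354), which is elided here, so treat this as an illustrative reconstruction rather than the exact source:

# Sketch of cmp_versions as traced above; returns 0 when $1 < $3.
cmp_versions() {
    local ver1 ver1_l ver2 ver2_l op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"     # @335: "1.15" -> (1 15), ver1_l=2
    IFS=.-: read -ra ver2 <<< "$3"     # @336: "2"    -> (2),    ver2_l=1
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    # @363: iterate up to the longer of the two component lists;
    # missing components compare as 0 in bash arithmetic.
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        (( ver1[v] > ver2[v] )) && return 1   # @366: left side is newer
        (( ver1[v] < ver2[v] )) && return 0   # @367: here 1 < 2, so lt holds
    done
    return 1                                  # equal versions are not '<'
}

lt() { cmp_versions "$1" '<' "$2"; }

Since lt 1.15 2 succeeds here, the test concludes the installed lcov predates 2.x and appends the 1.x spelling of the coverage switches (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) to LCOV_OPTS, as the following trace lines show.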
00:15:54.927 14:16:53 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:54.927 14:16:53 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:54.927 14:16:53 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:54.927 14:16:53 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:54.927 14:16:53 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:54.927 14:16:53 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:54.927 14:16:53 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:54.927 14:16:53 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:54.927 14:16:53 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:54.927 14:16:53 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:54.927 14:16:53 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:54.927 14:16:53 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:54.927 14:16:53 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:54.928 14:16:53 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:54.928 14:16:53 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:54.928 14:16:53 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:54.928 14:16:53 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:54.928 14:16:53 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:54.928 14:16:53 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:54.928 14:16:53 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:54.928 14:16:53 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:54.928 14:16:53 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:54.928 14:16:53 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:54.928 14:16:53 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:54.928 14:16:53 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:54.928 14:16:53 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:15:54.928 14:16:53 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:15:54.928 14:16:53 -- ftl/trim.sh@25 -- # timeout=240 00:15:54.928 14:16:53 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:15:54.928 14:16:53 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:15:54.928 14:16:53 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:15:54.928 14:16:53 -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:15:54.928 14:16:53 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:15:54.928 14:16:53 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:54.928 14:16:53 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:54.928 14:16:53 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:54.928 14:16:53 -- ftl/trim.sh@40 -- # svcpid=71967 00:15:54.928 14:16:53 -- ftl/trim.sh@41 -- # waitforlisten 71967 00:15:54.928 14:16:53 -- common/autotest_common.sh@829 -- # '[' -z 71967 ']' 00:15:54.928 14:16:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.928 14:16:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:54.928 14:16:53 -- 
ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:15:54.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.928 14:16:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.928 14:16:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:54.928 14:16:53 -- common/autotest_common.sh@10 -- # set +x 00:15:54.928 [2024-11-19 14:16:53.302974] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:54.928 [2024-11-19 14:16:53.303189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71967 ] 00:15:54.928 [2024-11-19 14:16:53.443008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:55.187 [2024-11-19 14:16:53.588314] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:55.188 [2024-11-19 14:16:53.588799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.188 [2024-11-19 14:16:53.589066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.188 [2024-11-19 14:16:53.589094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.758 14:16:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.758 14:16:54 -- common/autotest_common.sh@862 -- # return 0 00:15:55.758 14:16:54 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:55.758 14:16:54 -- ftl/common.sh@54 -- # local name=nvme0 00:15:55.758 14:16:54 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:55.758 14:16:54 -- ftl/common.sh@56 -- # local size=103424 00:15:55.758 14:16:54 -- ftl/common.sh@59 -- # local base_bdev 00:15:55.758 14:16:54 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:56.020 14:16:54 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:56.020 14:16:54 -- ftl/common.sh@62 -- # local base_size 00:15:56.020 14:16:54 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:56.020 14:16:54 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:56.020 14:16:54 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:56.020 14:16:54 -- common/autotest_common.sh@1369 -- # local bs 00:15:56.020 14:16:54 -- common/autotest_common.sh@1370 -- # local nb 00:15:56.020 14:16:54 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:56.020 14:16:54 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:56.020 { 00:15:56.020 "name": "nvme0n1", 00:15:56.020 "aliases": [ 00:15:56.020 "cb211df0-a7dc-4a67-8467-1b12f2819062" 00:15:56.020 ], 00:15:56.020 "product_name": "NVMe disk", 00:15:56.020 "block_size": 4096, 00:15:56.020 "num_blocks": 1310720, 00:15:56.020 "uuid": "cb211df0-a7dc-4a67-8467-1b12f2819062", 00:15:56.020 "assigned_rate_limits": { 00:15:56.020 "rw_ios_per_sec": 0, 00:15:56.020 "rw_mbytes_per_sec": 0, 00:15:56.020 "r_mbytes_per_sec": 0, 00:15:56.020 "w_mbytes_per_sec": 0 00:15:56.020 }, 00:15:56.020 "claimed": true, 00:15:56.020 "claim_type": "read_many_write_one", 00:15:56.020 "zoned": false, 00:15:56.020 "supported_io_types": { 00:15:56.020 "read": true, 00:15:56.020 "write": true, 00:15:56.020 "unmap": true, 00:15:56.020 
"write_zeroes": true, 00:15:56.020 "flush": true, 00:15:56.020 "reset": true, 00:15:56.020 "compare": true, 00:15:56.020 "compare_and_write": false, 00:15:56.020 "abort": true, 00:15:56.020 "nvme_admin": true, 00:15:56.020 "nvme_io": true 00:15:56.020 }, 00:15:56.020 "driver_specific": { 00:15:56.020 "nvme": [ 00:15:56.020 { 00:15:56.020 "pci_address": "0000:00:07.0", 00:15:56.020 "trid": { 00:15:56.020 "trtype": "PCIe", 00:15:56.020 "traddr": "0000:00:07.0" 00:15:56.020 }, 00:15:56.020 "ctrlr_data": { 00:15:56.020 "cntlid": 0, 00:15:56.020 "vendor_id": "0x1b36", 00:15:56.020 "model_number": "QEMU NVMe Ctrl", 00:15:56.020 "serial_number": "12341", 00:15:56.020 "firmware_revision": "8.0.0", 00:15:56.020 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:56.020 "oacs": { 00:15:56.020 "security": 0, 00:15:56.020 "format": 1, 00:15:56.020 "firmware": 0, 00:15:56.020 "ns_manage": 1 00:15:56.020 }, 00:15:56.020 "multi_ctrlr": false, 00:15:56.020 "ana_reporting": false 00:15:56.020 }, 00:15:56.020 "vs": { 00:15:56.020 "nvme_version": "1.4" 00:15:56.020 }, 00:15:56.020 "ns_data": { 00:15:56.020 "id": 1, 00:15:56.020 "can_share": false 00:15:56.020 } 00:15:56.020 } 00:15:56.020 ], 00:15:56.020 "mp_policy": "active_passive" 00:15:56.020 } 00:15:56.020 } 00:15:56.020 ]' 00:15:56.020 14:16:54 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:56.281 14:16:54 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:56.281 14:16:54 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:56.281 14:16:54 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:56.281 14:16:54 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:56.281 14:16:54 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:56.281 14:16:54 -- ftl/common.sh@63 -- # base_size=5120 00:15:56.281 14:16:54 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:56.281 14:16:54 -- ftl/common.sh@67 -- # clear_lvols 00:15:56.281 14:16:54 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:56.281 14:16:54 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:56.281 14:16:54 -- ftl/common.sh@28 -- # stores=720097cd-1e2e-4936-9815-4978edb5f591 00:15:56.281 14:16:54 -- ftl/common.sh@29 -- # for lvs in $stores 00:15:56.281 14:16:54 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 720097cd-1e2e-4936-9815-4978edb5f591 00:15:56.542 14:16:55 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:56.802 14:16:55 -- ftl/common.sh@68 -- # lvs=8ab1ca4f-d2a0-43fb-902c-94f1bf013f3b 00:15:56.802 14:16:55 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8ab1ca4f-d2a0-43fb-902c-94f1bf013f3b 00:15:57.064 14:16:55 -- ftl/trim.sh@43 -- # split_bdev=69fe9145-3dc3-46a2-9b6a-1f7b129dc816 00:15:57.064 14:16:55 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 69fe9145-3dc3-46a2-9b6a-1f7b129dc816 00:15:57.065 14:16:55 -- ftl/common.sh@35 -- # local name=nvc0 00:15:57.065 14:16:55 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:57.065 14:16:55 -- ftl/common.sh@37 -- # local base_bdev=69fe9145-3dc3-46a2-9b6a-1f7b129dc816 00:15:57.065 14:16:55 -- ftl/common.sh@38 -- # local cache_size= 00:15:57.065 14:16:55 -- ftl/common.sh@41 -- # get_bdev_size 69fe9145-3dc3-46a2-9b6a-1f7b129dc816 00:15:57.065 14:16:55 -- common/autotest_common.sh@1367 -- # local bdev_name=69fe9145-3dc3-46a2-9b6a-1f7b129dc816 00:15:57.065 14:16:55 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:15:57.065 14:16:55 -- common/autotest_common.sh@1369 -- # local bs 00:15:57.065 14:16:55 -- common/autotest_common.sh@1370 -- # local nb 00:15:57.065 14:16:55 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 69fe9145-3dc3-46a2-9b6a-1f7b129dc816 00:15:57.065 14:16:55 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:57.065 { 00:15:57.065 "name": "69fe9145-3dc3-46a2-9b6a-1f7b129dc816", 00:15:57.065 "aliases": [ 00:15:57.065 "lvs/nvme0n1p0" 00:15:57.065 ], 00:15:57.065 "product_name": "Logical Volume", 00:15:57.065 "block_size": 4096, 00:15:57.065 "num_blocks": 26476544, 00:15:57.065 "uuid": "69fe9145-3dc3-46a2-9b6a-1f7b129dc816", 00:15:57.065 "assigned_rate_limits": { 00:15:57.065 "rw_ios_per_sec": 0, 00:15:57.065 "rw_mbytes_per_sec": 0, 00:15:57.065 "r_mbytes_per_sec": 0, 00:15:57.065 "w_mbytes_per_sec": 0 00:15:57.065 }, 00:15:57.065 "claimed": false, 00:15:57.065 "zoned": false, 00:15:57.065 "supported_io_types": { 00:15:57.065 "read": true, 00:15:57.065 "write": true, 00:15:57.065 "unmap": true, 00:15:57.065 "write_zeroes": true, 00:15:57.065 "flush": false, 00:15:57.065 "reset": true, 00:15:57.065 "compare": false, 00:15:57.065 "compare_and_write": false, 00:15:57.065 "abort": false, 00:15:57.065 "nvme_admin": false, 00:15:57.065 "nvme_io": false 00:15:57.065 }, 00:15:57.065 "driver_specific": { 00:15:57.065 "lvol": { 00:15:57.065 "lvol_store_uuid": "8ab1ca4f-d2a0-43fb-902c-94f1bf013f3b", 00:15:57.065 "base_bdev": "nvme0n1", 00:15:57.065 "thin_provision": true, 00:15:57.065 "snapshot": false, 00:15:57.065 "clone": false, 00:15:57.065 "esnap_clone": false 00:15:57.065 } 00:15:57.065 } 00:15:57.065 } 00:15:57.065 ]' 00:15:57.065 14:16:55 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:57.326 14:16:55 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:57.326 14:16:55 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:57.326 14:16:55 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:57.326 14:16:55 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:57.326 14:16:55 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:57.326 14:16:55 -- ftl/common.sh@41 -- # local base_size=5171 00:15:57.326 14:16:55 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:57.326 14:16:55 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:57.586 14:16:55 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:57.586 14:16:55 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:57.586 14:16:55 -- ftl/common.sh@48 -- # get_bdev_size 69fe9145-3dc3-46a2-9b6a-1f7b129dc816 00:15:57.586 14:16:55 -- common/autotest_common.sh@1367 -- # local bdev_name=69fe9145-3dc3-46a2-9b6a-1f7b129dc816 00:15:57.586 14:16:55 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:57.587 14:16:55 -- common/autotest_common.sh@1369 -- # local bs 00:15:57.587 14:16:55 -- common/autotest_common.sh@1370 -- # local nb 00:15:57.587 14:16:55 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 69fe9145-3dc3-46a2-9b6a-1f7b129dc816 00:15:57.587 14:16:56 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:57.587 { 00:15:57.587 "name": "69fe9145-3dc3-46a2-9b6a-1f7b129dc816", 00:15:57.587 "aliases": [ 00:15:57.587 "lvs/nvme0n1p0" 00:15:57.587 ], 00:15:57.587 "product_name": "Logical Volume", 00:15:57.587 "block_size": 4096, 00:15:57.587 "num_blocks": 26476544, 
00:15:57.587 "uuid": "69fe9145-3dc3-46a2-9b6a-1f7b129dc816", 00:15:57.587 "assigned_rate_limits": { 00:15:57.587 "rw_ios_per_sec": 0, 00:15:57.587 "rw_mbytes_per_sec": 0, 00:15:57.587 "r_mbytes_per_sec": 0, 00:15:57.587 "w_mbytes_per_sec": 0 00:15:57.587 }, 00:15:57.587 "claimed": false, 00:15:57.587 "zoned": false, 00:15:57.587 "supported_io_types": { 00:15:57.587 "read": true, 00:15:57.587 "write": true, 00:15:57.587 "unmap": true, 00:15:57.587 "write_zeroes": true, 00:15:57.587 "flush": false, 00:15:57.587 "reset": true, 00:15:57.587 "compare": false, 00:15:57.587 "compare_and_write": false, 00:15:57.587 "abort": false, 00:15:57.587 "nvme_admin": false, 00:15:57.587 "nvme_io": false 00:15:57.587 }, 00:15:57.587 "driver_specific": { 00:15:57.587 "lvol": { 00:15:57.587 "lvol_store_uuid": "8ab1ca4f-d2a0-43fb-902c-94f1bf013f3b", 00:15:57.587 "base_bdev": "nvme0n1", 00:15:57.587 "thin_provision": true, 00:15:57.587 "snapshot": false, 00:15:57.587 "clone": false, 00:15:57.587 "esnap_clone": false 00:15:57.587 } 00:15:57.587 } 00:15:57.587 } 00:15:57.587 ]' 00:15:57.587 14:16:56 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:57.587 14:16:56 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:57.587 14:16:56 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:57.847 14:16:56 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:57.847 14:16:56 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:57.847 14:16:56 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:57.847 14:16:56 -- ftl/common.sh@48 -- # cache_size=5171 00:15:57.847 14:16:56 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:57.847 14:16:56 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:15:57.847 14:16:56 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:15:57.847 14:16:56 -- ftl/trim.sh@47 -- # get_bdev_size 69fe9145-3dc3-46a2-9b6a-1f7b129dc816 00:15:57.847 14:16:56 -- common/autotest_common.sh@1367 -- # local bdev_name=69fe9145-3dc3-46a2-9b6a-1f7b129dc816 00:15:57.847 14:16:56 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:57.847 14:16:56 -- common/autotest_common.sh@1369 -- # local bs 00:15:57.847 14:16:56 -- common/autotest_common.sh@1370 -- # local nb 00:15:57.847 14:16:56 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 69fe9145-3dc3-46a2-9b6a-1f7b129dc816 00:15:58.108 14:16:56 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:58.108 { 00:15:58.108 "name": "69fe9145-3dc3-46a2-9b6a-1f7b129dc816", 00:15:58.108 "aliases": [ 00:15:58.108 "lvs/nvme0n1p0" 00:15:58.108 ], 00:15:58.108 "product_name": "Logical Volume", 00:15:58.108 "block_size": 4096, 00:15:58.108 "num_blocks": 26476544, 00:15:58.108 "uuid": "69fe9145-3dc3-46a2-9b6a-1f7b129dc816", 00:15:58.108 "assigned_rate_limits": { 00:15:58.108 "rw_ios_per_sec": 0, 00:15:58.108 "rw_mbytes_per_sec": 0, 00:15:58.108 "r_mbytes_per_sec": 0, 00:15:58.108 "w_mbytes_per_sec": 0 00:15:58.108 }, 00:15:58.108 "claimed": false, 00:15:58.108 "zoned": false, 00:15:58.108 "supported_io_types": { 00:15:58.108 "read": true, 00:15:58.108 "write": true, 00:15:58.108 "unmap": true, 00:15:58.108 "write_zeroes": true, 00:15:58.108 "flush": false, 00:15:58.108 "reset": true, 00:15:58.108 "compare": false, 00:15:58.108 "compare_and_write": false, 00:15:58.108 "abort": false, 00:15:58.108 "nvme_admin": false, 00:15:58.108 "nvme_io": false 00:15:58.108 }, 00:15:58.108 "driver_specific": { 00:15:58.108 "lvol": { 00:15:58.108 
"lvol_store_uuid": "8ab1ca4f-d2a0-43fb-902c-94f1bf013f3b", 00:15:58.108 "base_bdev": "nvme0n1", 00:15:58.108 "thin_provision": true, 00:15:58.108 "snapshot": false, 00:15:58.108 "clone": false, 00:15:58.108 "esnap_clone": false 00:15:58.108 } 00:15:58.108 } 00:15:58.108 } 00:15:58.108 ]' 00:15:58.108 14:16:56 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:58.108 14:16:56 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:58.108 14:16:56 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:58.108 14:16:56 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:58.108 14:16:56 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:58.108 14:16:56 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:58.108 14:16:56 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:15:58.108 14:16:56 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 69fe9145-3dc3-46a2-9b6a-1f7b129dc816 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:15:58.370 [2024-11-19 14:16:56.780832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.370 [2024-11-19 14:16:56.780961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:58.370 [2024-11-19 14:16:56.780981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:58.370 [2024-11-19 14:16:56.780988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.370 [2024-11-19 14:16:56.783106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.370 [2024-11-19 14:16:56.783132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:58.370 [2024-11-19 14:16:56.783141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.094 ms 00:15:58.370 [2024-11-19 14:16:56.783148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.370 [2024-11-19 14:16:56.783210] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:58.370 [2024-11-19 14:16:56.783762] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:58.370 [2024-11-19 14:16:56.783784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.370 [2024-11-19 14:16:56.783790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:58.370 [2024-11-19 14:16:56.783799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:15:58.370 [2024-11-19 14:16:56.783805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.370 [2024-11-19 14:16:56.783870] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3df0115d-4eed-4c52-9819-8d435bdfff0b 00:15:58.370 [2024-11-19 14:16:56.784767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.370 [2024-11-19 14:16:56.784976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:58.370 [2024-11-19 14:16:56.784990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:15:58.370 [2024-11-19 14:16:56.784998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.370 [2024-11-19 14:16:56.789745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.370 [2024-11-19 14:16:56.789771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:58.370 
[2024-11-19 14:16:56.789779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.665 ms 00:15:58.371 [2024-11-19 14:16:56.789786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.371 [2024-11-19 14:16:56.789888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.371 [2024-11-19 14:16:56.789898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:58.371 [2024-11-19 14:16:56.789905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:15:58.371 [2024-11-19 14:16:56.789914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.371 [2024-11-19 14:16:56.789944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.371 [2024-11-19 14:16:56.789952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:58.371 [2024-11-19 14:16:56.789958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:58.371 [2024-11-19 14:16:56.789965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.371 [2024-11-19 14:16:56.789993] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:58.371 [2024-11-19 14:16:56.792890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.371 [2024-11-19 14:16:56.792912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:58.371 [2024-11-19 14:16:56.792922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.900 ms 00:15:58.371 [2024-11-19 14:16:56.792928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.371 [2024-11-19 14:16:56.792980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.371 [2024-11-19 14:16:56.792987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:58.371 [2024-11-19 14:16:56.792996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:58.371 [2024-11-19 14:16:56.793001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.371 [2024-11-19 14:16:56.793027] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:58.371 [2024-11-19 14:16:56.793110] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:58.371 [2024-11-19 14:16:56.793123] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:58.371 [2024-11-19 14:16:56.793133] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:58.371 [2024-11-19 14:16:56.793142] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:58.371 [2024-11-19 14:16:56.793149] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:58.371 [2024-11-19 14:16:56.793158] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:58.371 [2024-11-19 14:16:56.793164] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:58.371 [2024-11-19 14:16:56.793172] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:58.371 [2024-11-19 14:16:56.793179] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:58.371 [2024-11-19 
14:16:56.793186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.371 [2024-11-19 14:16:56.793191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:58.371 [2024-11-19 14:16:56.793199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:15:58.371 [2024-11-19 14:16:56.793204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.371 [2024-11-19 14:16:56.793263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.371 [2024-11-19 14:16:56.793270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:58.371 [2024-11-19 14:16:56.793279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:58.371 [2024-11-19 14:16:56.793284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.371 [2024-11-19 14:16:56.793362] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:58.371 [2024-11-19 14:16:56.793370] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:58.371 [2024-11-19 14:16:56.793377] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:58.371 [2024-11-19 14:16:56.793384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:58.371 [2024-11-19 14:16:56.793390] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:58.371 [2024-11-19 14:16:56.793395] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:58.371 [2024-11-19 14:16:56.793420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:58.371 [2024-11-19 14:16:56.793427] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:58.371 [2024-11-19 14:16:56.793433] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:58.371 [2024-11-19 14:16:56.793439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:58.371 [2024-11-19 14:16:56.793445] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:58.371 [2024-11-19 14:16:56.793452] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:58.371 [2024-11-19 14:16:56.793459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:58.371 [2024-11-19 14:16:56.793464] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:58.371 [2024-11-19 14:16:56.793472] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:58.371 [2024-11-19 14:16:56.793478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:58.371 [2024-11-19 14:16:56.793486] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:58.371 [2024-11-19 14:16:56.793491] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:58.371 [2024-11-19 14:16:56.793498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:58.371 [2024-11-19 14:16:56.793505] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:58.371 [2024-11-19 14:16:56.793512] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:58.371 [2024-11-19 14:16:56.793518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:58.371 [2024-11-19 14:16:56.793525] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:58.371 [2024-11-19 14:16:56.793530] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 
MiB 00:15:58.371 [2024-11-19 14:16:56.793536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:58.371 [2024-11-19 14:16:56.793541] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:58.371 [2024-11-19 14:16:56.793548] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:58.371 [2024-11-19 14:16:56.793553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:58.371 [2024-11-19 14:16:56.793560] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:58.371 [2024-11-19 14:16:56.793565] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:58.371 [2024-11-19 14:16:56.793571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:58.371 [2024-11-19 14:16:56.793576] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:58.371 [2024-11-19 14:16:56.793584] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:58.371 [2024-11-19 14:16:56.793590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:58.371 [2024-11-19 14:16:56.793596] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:58.371 [2024-11-19 14:16:56.793601] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:58.371 [2024-11-19 14:16:56.793608] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:58.371 [2024-11-19 14:16:56.793613] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:58.371 [2024-11-19 14:16:56.793620] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:58.371 [2024-11-19 14:16:56.793625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:58.371 [2024-11-19 14:16:56.793632] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:58.371 [2024-11-19 14:16:56.793638] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:58.371 [2024-11-19 14:16:56.793644] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:58.371 [2024-11-19 14:16:56.793650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:58.371 [2024-11-19 14:16:56.793659] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:58.371 [2024-11-19 14:16:56.793664] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:58.371 [2024-11-19 14:16:56.793670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:58.371 [2024-11-19 14:16:56.793676] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:58.371 [2024-11-19 14:16:56.793684] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:58.371 [2024-11-19 14:16:56.793690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:58.371 [2024-11-19 14:16:56.793697] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:58.371 [2024-11-19 14:16:56.793704] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:58.371 [2024-11-19 14:16:56.793712] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:58.371 [2024-11-19 14:16:56.793719] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 
ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:58.371 [2024-11-19 14:16:56.793726] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:58.371 [2024-11-19 14:16:56.793731] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:58.371 [2024-11-19 14:16:56.793738] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:58.371 [2024-11-19 14:16:56.793744] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:58.371 [2024-11-19 14:16:56.793751] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:58.371 [2024-11-19 14:16:56.793757] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:58.371 [2024-11-19 14:16:56.793763] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:58.371 [2024-11-19 14:16:56.793769] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:58.371 [2024-11-19 14:16:56.793775] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:58.371 [2024-11-19 14:16:56.793781] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:58.372 [2024-11-19 14:16:56.793792] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:58.372 [2024-11-19 14:16:56.793797] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:58.372 [2024-11-19 14:16:56.793805] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:58.372 [2024-11-19 14:16:56.793811] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:58.372 [2024-11-19 14:16:56.793818] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:58.372 [2024-11-19 14:16:56.793824] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:58.372 [2024-11-19 14:16:56.793830] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:58.372 [2024-11-19 14:16:56.793836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.372 [2024-11-19 14:16:56.793843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:58.372 [2024-11-19 14:16:56.793849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:15:58.372 [2024-11-19 14:16:56.793857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.372 [2024-11-19 14:16:56.805907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:15:58.372 [2024-11-19 14:16:56.805933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:58.372 [2024-11-19 14:16:56.805940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.964 ms 00:15:58.372 [2024-11-19 14:16:56.805947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.372 [2024-11-19 14:16:56.806039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.372 [2024-11-19 14:16:56.806051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:58.372 [2024-11-19 14:16:56.806057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:15:58.372 [2024-11-19 14:16:56.806063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.372 [2024-11-19 14:16:56.830851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.372 [2024-11-19 14:16:56.830977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:58.372 [2024-11-19 14:16:56.831024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.767 ms 00:15:58.372 [2024-11-19 14:16:56.831044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.372 [2024-11-19 14:16:56.831121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.372 [2024-11-19 14:16:56.831145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:58.372 [2024-11-19 14:16:56.831162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:58.372 [2024-11-19 14:16:56.831182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.372 [2024-11-19 14:16:56.831489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.372 [2024-11-19 14:16:56.831556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:58.372 [2024-11-19 14:16:56.831591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:15:58.372 [2024-11-19 14:16:56.831609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.372 [2024-11-19 14:16:56.831708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.372 [2024-11-19 14:16:56.831728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:58.372 [2024-11-19 14:16:56.831744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:15:58.372 [2024-11-19 14:16:56.831761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.372 [2024-11-19 14:16:56.857345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.372 [2024-11-19 14:16:56.857475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:58.372 [2024-11-19 14:16:56.857534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.551 ms 00:15:58.372 [2024-11-19 14:16:56.857563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.372 [2024-11-19 14:16:56.869121] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:58.372 [2024-11-19 14:16:56.881087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.372 [2024-11-19 14:16:56.881175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:58.372 [2024-11-19 14:16:56.881217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.139 ms 00:15:58.372 
[2024-11-19 14:16:56.881235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.633 [2024-11-19 14:16:56.962684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.633 [2024-11-19 14:16:56.962786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:58.633 [2024-11-19 14:16:56.962846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.388 ms 00:15:58.633 [2024-11-19 14:16:56.962865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.633 [2024-11-19 14:16:56.962954] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:15:58.633 [2024-11-19 14:16:56.962986] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:01.937 [2024-11-19 14:16:59.916802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.937 [2024-11-19 14:16:59.917015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:01.937 [2024-11-19 14:16:59.917073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2953.831 ms 00:16:01.937 [2024-11-19 14:16:59.917094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.937 [2024-11-19 14:16:59.917368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.937 [2024-11-19 14:16:59.917436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:01.937 [2024-11-19 14:16:59.917459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:16:01.937 [2024-11-19 14:16:59.917478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.937 [2024-11-19 14:16:59.935591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.937 [2024-11-19 14:16:59.935693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:01.937 [2024-11-19 14:16:59.935763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.049 ms 00:16:01.937 [2024-11-19 14:16:59.935783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.937 [2024-11-19 14:16:59.953378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.937 [2024-11-19 14:16:59.953472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:01.937 [2024-11-19 14:16:59.953572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.527 ms 00:16:01.937 [2024-11-19 14:16:59.953588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.937 [2024-11-19 14:16:59.953854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.937 [2024-11-19 14:16:59.953886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:01.937 [2024-11-19 14:16:59.953905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:16:01.937 [2024-11-19 14:16:59.953959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.937 [2024-11-19 14:17:00.005460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.937 [2024-11-19 14:17:00.005556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:01.937 [2024-11-19 14:17:00.005648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.459 ms 00:16:01.937 [2024-11-19 14:17:00.005668] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.937 [2024-11-19 14:17:00.025187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.937 [2024-11-19 14:17:00.025300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:01.937 [2024-11-19 14:17:00.025356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.455 ms 00:16:01.937 [2024-11-19 14:17:00.025376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.937 [2024-11-19 14:17:00.028579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.937 [2024-11-19 14:17:00.028669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:01.937 [2024-11-19 14:17:00.028711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.147 ms 00:16:01.937 [2024-11-19 14:17:00.028729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.937 [2024-11-19 14:17:00.047719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.937 [2024-11-19 14:17:00.047822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:01.937 [2024-11-19 14:17:00.047871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.941 ms 00:16:01.937 [2024-11-19 14:17:00.047899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.937 [2024-11-19 14:17:00.047959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.937 [2024-11-19 14:17:00.047981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:01.937 [2024-11-19 14:17:00.048126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:01.937 [2024-11-19 14:17:00.048147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.937 [2024-11-19 14:17:00.048224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.937 [2024-11-19 14:17:00.048341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:01.937 [2024-11-19 14:17:00.048363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:01.937 [2024-11-19 14:17:00.048379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.937 [2024-11-19 14:17:00.049047] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:01.937 [2024-11-19 14:17:00.051635] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3267.975 ms, result 0 00:16:01.937 [2024-11-19 14:17:00.052377] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:01.937 { 00:16:01.937 "name": "ftl0", 00:16:01.937 "uuid": "3df0115d-4eed-4c52-9819-8d435bdfff0b" 00:16:01.937 } 00:16:01.937 14:17:00 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:01.937 14:17:00 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:16:01.937 14:17:00 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:01.937 14:17:00 -- common/autotest_common.sh@899 -- # local i 00:16:01.937 14:17:00 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:01.937 14:17:00 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:01.937 14:17:00 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:01.937 14:17:00 -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:01.937 [ 00:16:01.937 { 00:16:01.937 "name": "ftl0", 00:16:01.937 "aliases": [ 00:16:01.937 "3df0115d-4eed-4c52-9819-8d435bdfff0b" 00:16:01.937 ], 00:16:01.937 "product_name": "FTL disk", 00:16:01.937 "block_size": 4096, 00:16:01.937 "num_blocks": 23592960, 00:16:01.937 "uuid": "3df0115d-4eed-4c52-9819-8d435bdfff0b", 00:16:01.937 "assigned_rate_limits": { 00:16:01.937 "rw_ios_per_sec": 0, 00:16:01.937 "rw_mbytes_per_sec": 0, 00:16:01.937 "r_mbytes_per_sec": 0, 00:16:01.937 "w_mbytes_per_sec": 0 00:16:01.937 }, 00:16:01.937 "claimed": false, 00:16:01.937 "zoned": false, 00:16:01.937 "supported_io_types": { 00:16:01.937 "read": true, 00:16:01.937 "write": true, 00:16:01.937 "unmap": true, 00:16:01.937 "write_zeroes": true, 00:16:01.937 "flush": true, 00:16:01.937 "reset": false, 00:16:01.937 "compare": false, 00:16:01.937 "compare_and_write": false, 00:16:01.937 "abort": false, 00:16:01.937 "nvme_admin": false, 00:16:01.937 "nvme_io": false 00:16:01.937 }, 00:16:01.937 "driver_specific": { 00:16:01.937 "ftl": { 00:16:01.937 "base_bdev": "69fe9145-3dc3-46a2-9b6a-1f7b129dc816", 00:16:01.937 "cache": "nvc0n1p0" 00:16:01.937 } 00:16:01.937 } 00:16:01.937 } 00:16:01.937 ] 00:16:01.937 14:17:00 -- common/autotest_common.sh@905 -- # return 0 00:16:01.937 14:17:00 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:01.937 14:17:00 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:02.198 14:17:00 -- ftl/trim.sh@56 -- # echo ']}' 00:16:02.198 14:17:00 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:02.460 14:17:00 -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:02.460 { 00:16:02.460 "name": "ftl0", 00:16:02.460 "aliases": [ 00:16:02.460 "3df0115d-4eed-4c52-9819-8d435bdfff0b" 00:16:02.460 ], 00:16:02.460 "product_name": "FTL disk", 00:16:02.460 "block_size": 4096, 00:16:02.460 "num_blocks": 23592960, 00:16:02.460 "uuid": "3df0115d-4eed-4c52-9819-8d435bdfff0b", 00:16:02.460 "assigned_rate_limits": { 00:16:02.460 "rw_ios_per_sec": 0, 00:16:02.460 "rw_mbytes_per_sec": 0, 00:16:02.460 "r_mbytes_per_sec": 0, 00:16:02.460 "w_mbytes_per_sec": 0 00:16:02.460 }, 00:16:02.460 "claimed": false, 00:16:02.460 "zoned": false, 00:16:02.460 "supported_io_types": { 00:16:02.460 "read": true, 00:16:02.460 "write": true, 00:16:02.460 "unmap": true, 00:16:02.460 "write_zeroes": true, 00:16:02.460 "flush": true, 00:16:02.460 "reset": false, 00:16:02.460 "compare": false, 00:16:02.460 "compare_and_write": false, 00:16:02.460 "abort": false, 00:16:02.460 "nvme_admin": false, 00:16:02.460 "nvme_io": false 00:16:02.460 }, 00:16:02.460 "driver_specific": { 00:16:02.460 "ftl": { 00:16:02.460 "base_bdev": "69fe9145-3dc3-46a2-9b6a-1f7b129dc816", 00:16:02.460 "cache": "nvc0n1p0" 00:16:02.460 } 00:16:02.460 } 00:16:02.460 } 00:16:02.460 ]' 00:16:02.460 14:17:00 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:02.460 14:17:00 -- ftl/trim.sh@60 -- # nb=23592960 00:16:02.460 14:17:00 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:02.460 [2024-11-19 14:17:01.013020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.460 [2024-11-19 14:17:01.013054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:02.460 [2024-11-19 14:17:01.013064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:02.460 [2024-11-19 14:17:01.013072] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.460 [2024-11-19 14:17:01.013097] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:02.460 [2024-11-19 14:17:01.014967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.460 [2024-11-19 14:17:01.014992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:02.460 [2024-11-19 14:17:01.015002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.856 ms 00:16:02.460 [2024-11-19 14:17:01.015008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.460 [2024-11-19 14:17:01.015416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.460 [2024-11-19 14:17:01.015425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:02.460 [2024-11-19 14:17:01.015434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:16:02.460 [2024-11-19 14:17:01.015441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.460 [2024-11-19 14:17:01.018189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.460 [2024-11-19 14:17:01.018205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:02.460 [2024-11-19 14:17:01.018215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.727 ms 00:16:02.460 [2024-11-19 14:17:01.018224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.724 [2024-11-19 14:17:01.023466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.724 [2024-11-19 14:17:01.023488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:02.724 [2024-11-19 14:17:01.023498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.204 ms 00:16:02.724 [2024-11-19 14:17:01.023505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.724 [2024-11-19 14:17:01.042748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.724 [2024-11-19 14:17:01.042773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:02.724 [2024-11-19 14:17:01.042783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.167 ms 00:16:02.724 [2024-11-19 14:17:01.042789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.724 [2024-11-19 14:17:01.056213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.724 [2024-11-19 14:17:01.056312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:02.724 [2024-11-19 14:17:01.056328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.377 ms 00:16:02.724 [2024-11-19 14:17:01.056334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.724 [2024-11-19 14:17:01.056484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.724 [2024-11-19 14:17:01.056494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:02.724 [2024-11-19 14:17:01.056504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:16:02.724 [2024-11-19 14:17:01.056511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.724 [2024-11-19 14:17:01.074976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.724 [2024-11-19 14:17:01.075066] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:02.724 [2024-11-19 14:17:01.075081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.444 ms 00:16:02.724 [2024-11-19 14:17:01.075086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.724 [2024-11-19 14:17:01.093551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.724 [2024-11-19 14:17:01.093576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:02.724 [2024-11-19 14:17:01.093585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.367 ms 00:16:02.724 [2024-11-19 14:17:01.093591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.724 [2024-11-19 14:17:01.111470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.724 [2024-11-19 14:17:01.111493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:02.724 [2024-11-19 14:17:01.111502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.829 ms 00:16:02.724 [2024-11-19 14:17:01.111508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.724 [2024-11-19 14:17:01.129332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.724 [2024-11-19 14:17:01.129420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:02.724 [2024-11-19 14:17:01.129436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.748 ms 00:16:02.724 [2024-11-19 14:17:01.129442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.724 [2024-11-19 14:17:01.129482] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:02.724 [2024-11-19 14:17:01.129493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:02.724 [2024-11-19 14:17:01.129503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:02.724 [2024-11-19 14:17:01.129509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:02.724 [2024-11-19 14:17:01.129517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:02.724 [2024-11-19 14:17:01.129523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:02.724 [2024-11-19 14:17:01.129531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:02.724 [2024-11-19 14:17:01.129536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:02.724 [2024-11-19 14:17:01.129544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:02.724 [2024-11-19 14:17:01.129550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:02.724 [2024-11-19 14:17:01.129558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:02.724 [2024-11-19 14:17:01.129563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:02.724 [2024-11-19 14:17:01.129571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:02.724 [2024-11-19 14:17:01.129576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free [Bands 14 through 87 repeat identically: 0 / 261120 wr_cnt: 0 state: free; per-band entries condensed]
00:16:02.725 [2024-11-19 14:17:01.130102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:02.725 [2024-11-19 14:17:01.130108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:02.725 [2024-11-19 14:17:01.130116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:02.725 [2024-11-19 14:17:01.130122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:02.725 [2024-11-19 14:17:01.130129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:02.725 [2024-11-19 14:17:01.130134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:02.725 [2024-11-19 14:17:01.130144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:02.725 [2024-11-19 14:17:01.130150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:02.725 [2024-11-19 14:17:01.130158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:02.725 [2024-11-19 14:17:01.130164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:02.725 [2024-11-19 14:17:01.130170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:02.725 [2024-11-19 14:17:01.130176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:02.725 [2024-11-19 14:17:01.130184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:02.725 [2024-11-19 14:17:01.130196] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:02.725 [2024-11-19 14:17:01.130203] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3df0115d-4eed-4c52-9819-8d435bdfff0b 00:16:02.725 [2024-11-19 14:17:01.130209] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:02.725 [2024-11-19 14:17:01.130216] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:02.725 [2024-11-19 14:17:01.130221] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:02.725 [2024-11-19 14:17:01.130228] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:02.725 [2024-11-19 14:17:01.130233] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:02.725 [2024-11-19 14:17:01.130242] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:02.725 [2024-11-19 14:17:01.130249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:02.725 [2024-11-19 14:17:01.130256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:02.725 [2024-11-19 14:17:01.130261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:02.725 [2024-11-19 14:17:01.130267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.725 [2024-11-19 14:17:01.130273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:02.725 [2024-11-19 14:17:01.130281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:16:02.725 [2024-11-19 14:17:01.130286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
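
A note on the statistics block just dumped (arithmetic added here for context): WAF is the write amplification factor, total media writes divided by user writes; with total writes: 960 and user writes: 0 the ratio 960/0 is reported as inf, and those 960 writes are presumably the metadata persisted during this shutdown (superblock, band info, trim and valid-map metadata). Each of the 100 bands spans 261120 x 4 KiB = 1020 MiB, so the band set covers roughly 102000 MiB of the base device.
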
[FTL][ftl0] status: 0 00:16:02.725 [2024-11-19 14:17:01.139750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.725 [2024-11-19 14:17:01.139771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:02.725 [2024-11-19 14:17:01.139780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.441 ms 00:16:02.725 [2024-11-19 14:17:01.139786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.725 [2024-11-19 14:17:01.139974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.725 [2024-11-19 14:17:01.139994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:02.725 [2024-11-19 14:17:01.140002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:16:02.725 [2024-11-19 14:17:01.140008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.725 [2024-11-19 14:17:01.174524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.725 [2024-11-19 14:17:01.174550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:02.725 [2024-11-19 14:17:01.174562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.725 [2024-11-19 14:17:01.174569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.725 [2024-11-19 14:17:01.174645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.725 [2024-11-19 14:17:01.174652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:02.725 [2024-11-19 14:17:01.174660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.725 [2024-11-19 14:17:01.174666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.725 [2024-11-19 14:17:01.174712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.725 [2024-11-19 14:17:01.174720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:02.725 [2024-11-19 14:17:01.174727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.725 [2024-11-19 14:17:01.174733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.725 [2024-11-19 14:17:01.174758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.725 [2024-11-19 14:17:01.174764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:02.725 [2024-11-19 14:17:01.174772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.725 [2024-11-19 14:17:01.174777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.725 [2024-11-19 14:17:01.240848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.725 [2024-11-19 14:17:01.240899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:02.725 [2024-11-19 14:17:01.240914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.725 [2024-11-19 14:17:01.240923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.725 [2024-11-19 14:17:01.263438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.725 [2024-11-19 14:17:01.263550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:02.725 [2024-11-19 14:17:01.263604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.725 
[2024-11-19 14:17:01.263610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.725 [2024-11-19 14:17:01.263670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.725 [2024-11-19 14:17:01.263677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:02.725 [2024-11-19 14:17:01.263685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.725 [2024-11-19 14:17:01.263690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.725 [2024-11-19 14:17:01.263731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.725 [2024-11-19 14:17:01.263740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:02.725 [2024-11-19 14:17:01.263746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.725 [2024-11-19 14:17:01.263763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.725 [2024-11-19 14:17:01.263851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.725 [2024-11-19 14:17:01.263860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:02.725 [2024-11-19 14:17:01.263870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.725 [2024-11-19 14:17:01.263892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.726 [2024-11-19 14:17:01.263936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.726 [2024-11-19 14:17:01.263943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:02.726 [2024-11-19 14:17:01.263952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.726 [2024-11-19 14:17:01.263958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.726 [2024-11-19 14:17:01.264000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.726 [2024-11-19 14:17:01.264007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:02.726 [2024-11-19 14:17:01.264015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.726 [2024-11-19 14:17:01.264021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.726 [2024-11-19 14:17:01.264066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.726 [2024-11-19 14:17:01.264075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:02.726 [2024-11-19 14:17:01.264082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.726 [2024-11-19 14:17:01.264088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.726 [2024-11-19 14:17:01.264226] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 251.188 ms, result 0 00:16:02.726 true 00:16:02.987 14:17:01 -- ftl/trim.sh@63 -- # killprocess 71967 00:16:02.987 14:17:01 -- common/autotest_common.sh@936 -- # '[' -z 71967 ']' 00:16:02.987 14:17:01 -- common/autotest_common.sh@940 -- # kill -0 71967 00:16:02.987 14:17:01 -- common/autotest_common.sh@941 -- # uname 00:16:02.987 14:17:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:02.987 14:17:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71967 00:16:02.987 killing process with pid 71967 00:16:02.987 14:17:01 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:02.987 14:17:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:02.987 14:17:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71967' 00:16:02.987 14:17:01 -- common/autotest_common.sh@955 -- # kill 71967 00:16:02.987 14:17:01 -- common/autotest_common.sh@960 -- # wait 71967 00:16:12.983 14:17:10 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:12.983 65536+0 records in 00:16:12.983 65536+0 records out 00:16:12.983 268435456 bytes (268 MB, 256 MiB) copied, 1.10163 s, 244 MB/s 00:16:12.983 14:17:11 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:12.983 [2024-11-19 14:17:11.390524] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:12.983 [2024-11-19 14:17:11.390618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72228 ] 00:16:12.983 [2024-11-19 14:17:11.536505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.245 [2024-11-19 14:17:11.716214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.506 [2024-11-19 14:17:11.977531] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:13.506 [2024-11-19 14:17:11.978827] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:13.769 [2024-11-19 14:17:12.126156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.769 [2024-11-19 14:17:12.126201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:13.769 [2024-11-19 14:17:12.126215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:13.769 [2024-11-19 14:17:12.126222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.769 [2024-11-19 14:17:12.128890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.769 [2024-11-19 14:17:12.128926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:13.769 [2024-11-19 14:17:12.128936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.649 ms 00:16:13.769 [2024-11-19 14:17:12.128943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.769 [2024-11-19 14:17:12.129022] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:13.769 [2024-11-19 14:17:12.129737] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:13.769 [2024-11-19 14:17:12.129757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.769 [2024-11-19 14:17:12.129765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:13.769 [2024-11-19 14:17:12.129774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:16:13.769 [2024-11-19 14:17:12.129781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.769 [2024-11-19 14:17:12.130949] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:13.769 [2024-11-19 14:17:12.143808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
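
Two things happen in the stretch above. First, killprocess stops the previous SPDK app: it probes the pid with kill -0, confirms the process name (reactor_0), then kills and waits on it. Second, trim.sh generates a 256 MiB random pattern and pushes it into ftl0 through spdk_dd, which boots a fresh FTL instance (the 'FTL startup' trace that follows) before copying. A condensed sketch, with paths shortened and the dd output file assumed from the --if path used next (the set -x trace elides the redirection):

  # 65536 x 4 KiB = 268435456 B = 256 MiB of random data
  dd if=/dev/urandom of=test/ftl/random_pattern bs=4K count=65536
  # replay the saved bdev config and copy the pattern onto the FTL bdev
  build/bin/spdk_dd --if=test/ftl/random_pattern --ob=ftl0 \
      --json=test/ftl/config/ftl.json
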
[FTL][ftl0] Action 00:16:13.769 [2024-11-19 14:17:12.143844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:13.769 [2024-11-19 14:17:12.143855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.861 ms 00:16:13.769 [2024-11-19 14:17:12.143862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.769 [2024-11-19 14:17:12.143979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.769 [2024-11-19 14:17:12.143992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:13.769 [2024-11-19 14:17:12.144000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:13.769 [2024-11-19 14:17:12.144007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.769 [2024-11-19 14:17:12.149279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.769 [2024-11-19 14:17:12.149308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:13.769 [2024-11-19 14:17:12.149317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.229 ms 00:16:13.769 [2024-11-19 14:17:12.149328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.769 [2024-11-19 14:17:12.149426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.769 [2024-11-19 14:17:12.149436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:13.769 [2024-11-19 14:17:12.149445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:13.770 [2024-11-19 14:17:12.149453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.770 [2024-11-19 14:17:12.149476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.770 [2024-11-19 14:17:12.149485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:13.770 [2024-11-19 14:17:12.149493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:13.770 [2024-11-19 14:17:12.149500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.770 [2024-11-19 14:17:12.149528] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:13.770 [2024-11-19 14:17:12.153121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.770 [2024-11-19 14:17:12.153149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:13.770 [2024-11-19 14:17:12.153158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.606 ms 00:16:13.770 [2024-11-19 14:17:12.153168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.770 [2024-11-19 14:17:12.153204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.770 [2024-11-19 14:17:12.153212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:13.770 [2024-11-19 14:17:12.153221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:13.770 [2024-11-19 14:17:12.153228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.770 [2024-11-19 14:17:12.153245] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:13.770 [2024-11-19 14:17:12.153263] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:13.770 [2024-11-19 14:17:12.153295] 
upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:13.770 [2024-11-19 14:17:12.153312] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:13.770 [2024-11-19 14:17:12.153384] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:13.770 [2024-11-19 14:17:12.153394] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:13.770 [2024-11-19 14:17:12.153404] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:13.770 [2024-11-19 14:17:12.153413] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:13.770 [2024-11-19 14:17:12.153422] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:13.770 [2024-11-19 14:17:12.153431] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:13.770 [2024-11-19 14:17:12.153438] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:13.770 [2024-11-19 14:17:12.153445] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:13.770 [2024-11-19 14:17:12.153455] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:13.770 [2024-11-19 14:17:12.153462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.770 [2024-11-19 14:17:12.153469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:13.770 [2024-11-19 14:17:12.153477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:16:13.770 [2024-11-19 14:17:12.153484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.770 [2024-11-19 14:17:12.153558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.770 [2024-11-19 14:17:12.153568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:13.770 [2024-11-19 14:17:12.153575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:13.770 [2024-11-19 14:17:12.153582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.770 [2024-11-19 14:17:12.153658] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:13.770 [2024-11-19 14:17:12.153669] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:13.770 [2024-11-19 14:17:12.153676] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:13.770 [2024-11-19 14:17:12.153684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:13.770 [2024-11-19 14:17:12.153692] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:13.770 [2024-11-19 14:17:12.153699] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:13.770 [2024-11-19 14:17:12.153705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:13.770 [2024-11-19 14:17:12.153713] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:13.770 [2024-11-19 14:17:12.153720] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:13.770 [2024-11-19 14:17:12.153727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:13.770 [2024-11-19 14:17:12.153734] 
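
A consistency check on the layout summary above (arithmetic added here, not in the log): the L2P table needs 23592960 entries x 4 B per address = 94371840 B = 90 MiB, which is exactly the "Region l2p ... blocks: 90.00 MiB" entry in the NV cache layout dump that follows, and the exposed ftl0 capacity is 23592960 x 4 KiB = 90 GiB carved from the 103424 MiB (101 GiB) base device, the difference covering metadata regions and spare bands kept back for relocation.
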
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:13.770 [2024-11-19 14:17:12.153740] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:13.770 [2024-11-19 14:17:12.153747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:13.770 [2024-11-19 14:17:12.153754] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:13.770 [2024-11-19 14:17:12.153767] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:13.770 [2024-11-19 14:17:12.153774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:13.770 [2024-11-19 14:17:12.153780] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:13.770 [2024-11-19 14:17:12.153786] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:13.770 [2024-11-19 14:17:12.153793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:13.770 [2024-11-19 14:17:12.153800] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:13.770 [2024-11-19 14:17:12.153806] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:13.770 [2024-11-19 14:17:12.153812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:13.770 [2024-11-19 14:17:12.153819] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:13.770 [2024-11-19 14:17:12.153826] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:13.770 [2024-11-19 14:17:12.153832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:13.770 [2024-11-19 14:17:12.153838] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:13.770 [2024-11-19 14:17:12.153844] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:13.770 [2024-11-19 14:17:12.153850] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:13.770 [2024-11-19 14:17:12.153857] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:13.770 [2024-11-19 14:17:12.153863] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:13.770 [2024-11-19 14:17:12.153869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:13.770 [2024-11-19 14:17:12.153896] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:13.770 [2024-11-19 14:17:12.153903] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:13.770 [2024-11-19 14:17:12.153909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:13.770 [2024-11-19 14:17:12.153915] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:13.770 [2024-11-19 14:17:12.153922] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:13.770 [2024-11-19 14:17:12.153929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:13.770 [2024-11-19 14:17:12.153935] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:13.770 [2024-11-19 14:17:12.153941] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:13.770 [2024-11-19 14:17:12.153947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:13.770 [2024-11-19 14:17:12.153953] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:13.770 [2024-11-19 14:17:12.153960] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
sb_mirror 00:16:13.770 [2024-11-19 14:17:12.153967] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:13.770 [2024-11-19 14:17:12.153977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:13.770 [2024-11-19 14:17:12.153985] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:13.770 [2024-11-19 14:17:12.153992] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:13.770 [2024-11-19 14:17:12.153999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:13.770 [2024-11-19 14:17:12.154006] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:13.770 [2024-11-19 14:17:12.154013] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:13.770 [2024-11-19 14:17:12.154020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:13.770 [2024-11-19 14:17:12.154028] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:13.770 [2024-11-19 14:17:12.154039] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:13.770 [2024-11-19 14:17:12.154047] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:13.770 [2024-11-19 14:17:12.154054] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:13.770 [2024-11-19 14:17:12.154061] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:13.770 [2024-11-19 14:17:12.154068] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:13.770 [2024-11-19 14:17:12.154075] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:13.770 [2024-11-19 14:17:12.154089] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:13.770 [2024-11-19 14:17:12.154096] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:13.770 [2024-11-19 14:17:12.154103] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:13.770 [2024-11-19 14:17:12.154110] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:13.770 [2024-11-19 14:17:12.154116] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:13.770 [2024-11-19 14:17:12.154124] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:13.770 [2024-11-19 14:17:12.154131] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:13.771 [2024-11-19 14:17:12.154138] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:13.771 [2024-11-19 14:17:12.154145] 
upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:13.771 [2024-11-19 14:17:12.154159] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:13.771 [2024-11-19 14:17:12.154168] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:13.771 [2024-11-19 14:17:12.154175] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:13.771 [2024-11-19 14:17:12.154182] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:13.771 [2024-11-19 14:17:12.154188] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:13.771 [2024-11-19 14:17:12.154196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.771 [2024-11-19 14:17:12.154203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:13.771 [2024-11-19 14:17:12.154210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:16:13.771 [2024-11-19 14:17:12.154218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.771 [2024-11-19 14:17:12.169225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.771 [2024-11-19 14:17:12.169259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:13.771 [2024-11-19 14:17:12.169269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.953 ms 00:16:13.771 [2024-11-19 14:17:12.169277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.771 [2024-11-19 14:17:12.169392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.771 [2024-11-19 14:17:12.169401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:13.771 [2024-11-19 14:17:12.169410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:16:13.771 [2024-11-19 14:17:12.169418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.771 [2024-11-19 14:17:12.210496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.771 [2024-11-19 14:17:12.210660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:13.771 [2024-11-19 14:17:12.210679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.056 ms 00:16:13.771 [2024-11-19 14:17:12.210688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.771 [2024-11-19 14:17:12.210762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.771 [2024-11-19 14:17:12.210773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:13.771 [2024-11-19 14:17:12.210787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:13.771 [2024-11-19 14:17:12.210794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.771 [2024-11-19 14:17:12.211182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.771 [2024-11-19 14:17:12.211200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:13.771 [2024-11-19 14:17:12.211211] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:16:13.771 [2024-11-19 14:17:12.211219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.771 [2024-11-19 14:17:12.211368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.771 [2024-11-19 14:17:12.211379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:13.771 [2024-11-19 14:17:12.211388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:16:13.771 [2024-11-19 14:17:12.211396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.771 [2024-11-19 14:17:12.226370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.771 [2024-11-19 14:17:12.226404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:13.771 [2024-11-19 14:17:12.226414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.950 ms 00:16:13.771 [2024-11-19 14:17:12.226424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.771 [2024-11-19 14:17:12.239788] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:13.771 [2024-11-19 14:17:12.239835] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:13.771 [2024-11-19 14:17:12.239846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.771 [2024-11-19 14:17:12.239854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:13.771 [2024-11-19 14:17:12.239864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.319 ms 00:16:13.771 [2024-11-19 14:17:12.239871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.771 [2024-11-19 14:17:12.265140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.771 [2024-11-19 14:17:12.265317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:13.771 [2024-11-19 14:17:12.265346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.168 ms 00:16:13.771 [2024-11-19 14:17:12.265355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.771 [2024-11-19 14:17:12.277906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.771 [2024-11-19 14:17:12.277950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:13.771 [2024-11-19 14:17:12.277962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.468 ms 00:16:13.771 [2024-11-19 14:17:12.277980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.771 [2024-11-19 14:17:12.290491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.771 [2024-11-19 14:17:12.290539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:13.771 [2024-11-19 14:17:12.290551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.424 ms 00:16:13.771 [2024-11-19 14:17:12.290559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.771 [2024-11-19 14:17:12.290999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.771 [2024-11-19 14:17:12.291015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:13.771 [2024-11-19 14:17:12.291025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:16:13.771 
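
Worth noting in the restore steps above: ftl_nv_cache_load_state reports full chunks = 0, empty chunks = 4, which lines up with the "NV cache chunk count 4" layout entry from startup; because the previous shutdown left the device clean (full chunks = 0), there is no user data to replay from the 5171 MiB cache device and each restore step completes with status 0.
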
[2024-11-19 14:17:12.291033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.033 [2024-11-19 14:17:12.359530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.033 [2024-11-19 14:17:12.359588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:14.033 [2024-11-19 14:17:12.359604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.470 ms 00:16:14.033 [2024-11-19 14:17:12.359612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.033 [2024-11-19 14:17:12.371148] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:14.033 [2024-11-19 14:17:12.390931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.033 [2024-11-19 14:17:12.390978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:14.033 [2024-11-19 14:17:12.390991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.197 ms 00:16:14.033 [2024-11-19 14:17:12.391000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.033 [2024-11-19 14:17:12.391090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.033 [2024-11-19 14:17:12.391102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:14.033 [2024-11-19 14:17:12.391112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:14.033 [2024-11-19 14:17:12.391124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.033 [2024-11-19 14:17:12.391182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.033 [2024-11-19 14:17:12.391197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:14.033 [2024-11-19 14:17:12.391205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:14.033 [2024-11-19 14:17:12.391214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.033 [2024-11-19 14:17:12.392659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.033 [2024-11-19 14:17:12.392713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:14.033 [2024-11-19 14:17:12.392725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.395 ms 00:16:14.033 [2024-11-19 14:17:12.392732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.033 [2024-11-19 14:17:12.392775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.033 [2024-11-19 14:17:12.392786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:14.033 [2024-11-19 14:17:12.392804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:14.033 [2024-11-19 14:17:12.392813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.033 [2024-11-19 14:17:12.392851] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:14.033 [2024-11-19 14:17:12.392862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.033 [2024-11-19 14:17:12.392870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:14.033 [2024-11-19 14:17:12.392897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:14.033 [2024-11-19 14:17:12.392905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.033 [2024-11-19 
14:17:12.419037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.033 [2024-11-19 14:17:12.419098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:14.033 [2024-11-19 14:17:12.419113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.105 ms 00:16:14.033 [2024-11-19 14:17:12.419122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.033 [2024-11-19 14:17:12.419257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.033 [2024-11-19 14:17:12.419269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:14.033 [2024-11-19 14:17:12.419280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:14.033 [2024-11-19 14:17:12.419288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.033 [2024-11-19 14:17:12.420552] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:14.033 [2024-11-19 14:17:12.424267] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 294.031 ms, result 0 00:16:14.033 [2024-11-19 14:17:12.425649] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:14.033 [2024-11-19 14:17:12.439804] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:14.975  [2024-11-19T14:17:14.479Z] Copying: 13/256 [MB] (13 MBps) [2024-11-19T14:17:15.488Z] Copying: 26/256 [MB] (13 MBps) [2024-11-19T14:17:16.874Z] Copying: 45/256 [MB] (18 MBps) [2024-11-19T14:17:17.476Z] Copying: 61/256 [MB] (16 MBps) [2024-11-19T14:17:18.865Z] Copying: 74/256 [MB] (12 MBps) [2024-11-19T14:17:19.808Z] Copying: 85/256 [MB] (11 MBps) [2024-11-19T14:17:20.753Z] Copying: 98/256 [MB] (13 MBps) [2024-11-19T14:17:21.697Z] Copying: 109/256 [MB] (10 MBps) [2024-11-19T14:17:22.640Z] Copying: 121/256 [MB] (12 MBps) [2024-11-19T14:17:23.584Z] Copying: 137/256 [MB] (15 MBps) [2024-11-19T14:17:24.529Z] Copying: 156/256 [MB] (19 MBps) [2024-11-19T14:17:25.474Z] Copying: 170/256 [MB] (13 MBps) [2024-11-19T14:17:26.863Z] Copying: 187/256 [MB] (17 MBps) [2024-11-19T14:17:27.808Z] Copying: 203/256 [MB] (15 MBps) [2024-11-19T14:17:28.753Z] Copying: 216/256 [MB] (12 MBps) [2024-11-19T14:17:29.708Z] Copying: 230/256 [MB] (13 MBps) [2024-11-19T14:17:30.283Z] Copying: 245/256 [MB] (15 MBps) [2024-11-19T14:17:30.283Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-19 14:17:30.119145] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:31.721 [2024-11-19 14:17:30.126331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.721 [2024-11-19 14:17:30.126465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:31.721 [2024-11-19 14:17:30.126487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:31.721 [2024-11-19 14:17:30.126493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.721 [2024-11-19 14:17:30.126512] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:31.721 [2024-11-19 14:17:30.128578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.721 [2024-11-19 14:17:30.128601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 
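
The flattened progress ticker above is spdk_dd copying the 256 MiB pattern into ftl0 at 10 to 19 MBps per interval, 14 MBps on average, i.e. roughly 18 seconds of wall time; once the copy finishes, spdk_dd tears the device down again, so the trace below repeats the same 'FTL shutdown' step sequence seen earlier in this log.
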
00:16:31.721 [2024-11-19 14:17:30.126331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:31.721 [2024-11-19 14:17:30.126465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:16:31.721 [2024-11-19 14:17:30.126487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:16:31.721 [2024-11-19 14:17:30.126493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.721 [2024-11-19 14:17:30.126512] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:16:31.721 [2024-11-19 14:17:30.128578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:31.721 [2024-11-19 14:17:30.128601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:16:31.721 [2024-11-19 14:17:30.128609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.055 ms
00:16:31.721 [2024-11-19 14:17:30.128616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.721 [2024-11-19 14:17:30.130527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:31.721 [2024-11-19 14:17:30.130551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:16:31.721 [2024-11-19 14:17:30.130558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.893 ms
00:16:31.721 [2024-11-19 14:17:30.130564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.721 [2024-11-19 14:17:30.136664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:31.721 [2024-11-19 14:17:30.136689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:16:31.721 [2024-11-19 14:17:30.136696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.084 ms
00:16:31.721 [2024-11-19 14:17:30.136702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.721 [2024-11-19 14:17:30.142153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:31.721 [2024-11-19 14:17:30.142174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:16:31.721 [2024-11-19 14:17:30.142182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.419 ms
00:16:31.721 [2024-11-19 14:17:30.142189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.721 [2024-11-19 14:17:30.159649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:31.721 [2024-11-19 14:17:30.159672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:16:31.721 [2024-11-19 14:17:30.159681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.414 ms
00:16:31.721 [2024-11-19 14:17:30.159687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.721 [2024-11-19 14:17:30.171425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:31.721 [2024-11-19 14:17:30.171530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:16:31.721 [2024-11-19 14:17:30.171543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.704 ms
00:16:31.721 [2024-11-19 14:17:30.171549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.721 [2024-11-19 14:17:30.171648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:31.721 [2024-11-19 14:17:30.171655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:16:31.721 [2024-11-19 14:17:30.171662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms
00:16:31.721 [2024-11-19 14:17:30.171667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.721 [2024-11-19 14:17:30.190300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:31.721 [2024-11-19 14:17:30.190395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:16:31.721 [2024-11-19 14:17:30.190407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.621 ms
00:16:31.721 [2024-11-19 14:17:30.190412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.721 [2024-11-19 14:17:30.207953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:31.721 [2024-11-19 14:17:30.207975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:16:31.721 [2024-11-19 14:17:30.207983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.508 ms
00:16:31.721 [2024-11-19 14:17:30.207988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.721 [2024-11-19 14:17:30.225810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:31.721 [2024-11-19 14:17:30.225916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:16:31.721 [2024-11-19 14:17:30.225928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.789 ms
00:16:31.721 [2024-11-19 14:17:30.225933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.721 [2024-11-19 14:17:30.243987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:31.721 [2024-11-19 14:17:30.244072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:16:31.721 [2024-11-19 14:17:30.244111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.004 ms
00:16:31.721 [2024-11-19 14:17:30.244127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.721 [2024-11-19 14:17:30.244166] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:16:31.721 [2024-11-19 14:17:30.244188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Bands 2 through 99 report identically: 0 / 261120 wr_cnt: 0 state: free ...]
00:16:31.722 [2024-11-19 14:17:30.247010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:16:31.722 [2024-11-19 14:17:30.247037] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:16:31.722 [2024-11-19 14:17:30.247053] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3df0115d-4eed-4c52-9819-8d435bdfff0b
00:16:31.722 [2024-11-19 14:17:30.247075] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:16:31.722 [2024-11-19 14:17:30.247089] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:16:31.722 [2024-11-19 14:17:30.247102] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:16:31.722 [2024-11-19 14:17:30.247145] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:16:31.722 [2024-11-19 14:17:30.247162] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:16:31.722 [2024-11-19 14:17:30.247176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:16:31.722 [2024-11-19 14:17:30.247194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:16:31.722 [2024-11-19 14:17:30.247208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:16:31.722 [2024-11-19 14:17:30.247220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:16:31.722 [2024-11-19 14:17:30.247249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:31.722 [2024-11-19 14:17:30.247264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:16:31.722 [2024-11-19 14:17:30.247281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.084 ms
00:16:31.722 [2024-11-19 14:17:30.247294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
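
In the ftl_dev_dump_bands output above, each entry reads valid blocks / total blocks per band, so 0 / 261120 with wr_cnt: 0 and state: free means every band is empty going into the clean shutdown. The stats block explains the WAF: inf line the same way: write amplification is the usual ratio of device writes to user writes, and with total writes: 960 against user writes: 0 the quotient is undefined and printed as inf. A sketch of the same guard in shell, assuming the two counters were parsed out of the log beforehand:

  total_writes=960   # from "total writes: 960" above
  user_writes=0      # from "user writes: 0" above
  if [ "$user_writes" -eq 0 ]; then
    echo "WAF: inf"                       # no user I/O yet, so the ratio is undefined
  else
    awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
  fi
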
00:16:31.722 [2024-11-19 14:17:30.256716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:31.722 [2024-11-19 14:17:30.256800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:16:31.722 [2024-11-19 14:17:30.256840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.388 ms
00:16:31.722 [2024-11-19 14:17:30.256860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.722 [2024-11-19 14:17:30.257028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:31.722 [2024-11-19 14:17:30.257131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:16:31.722 [2024-11-19 14:17:30.257150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms
00:16:31.722 [2024-11-19 14:17:30.257164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.984 [2024-11-19 14:17:30.286185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:31.984 [2024-11-19 14:17:30.286272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:16:31.984 [2024-11-19 14:17:30.286309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:31.984 [2024-11-19 14:17:30.286330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.984 [2024-11-19 14:17:30.286397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:31.984 [2024-11-19 14:17:30.286414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:16:31.984 [2024-11-19 14:17:30.286429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:31.984 [2024-11-19 14:17:30.286443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.984 [2024-11-19 14:17:30.286482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:31.984 [2024-11-19 14:17:30.286500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:16:31.984 [2024-11-19 14:17:30.286549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:31.984 [2024-11-19 14:17:30.286566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.984 [2024-11-19 14:17:30.286594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:31.984 [2024-11-19 14:17:30.286611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:16:31.984 [2024-11-19 14:17:30.286625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:31.984 [2024-11-19 14:17:30.286640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.984 [2024-11-19 14:17:30.342810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:31.984 [2024-11-19 14:17:30.342925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:16:31.984 [2024-11-19 14:17:30.342963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:31.984 [2024-11-19 14:17:30.342985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.984 [2024-11-19 14:17:30.365205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:31.984 [2024-11-19 14:17:30.365292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:16:31.984 [2024-11-19 14:17:30.365327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:31.984 [2024-11-19 14:17:30.365343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.984 [2024-11-19 14:17:30.365390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:31.984 [2024-11-19 14:17:30.365407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:16:31.984 [2024-11-19 14:17:30.365421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:31.984 [2024-11-19 14:17:30.365435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.984 [2024-11-19 14:17:30.365465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:31.984 [2024-11-19 14:17:30.365485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:16:31.984 [2024-11-19 14:17:30.365501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:31.984 [2024-11-19 14:17:30.365545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.984 [2024-11-19 14:17:30.365626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:31.984 [2024-11-19 14:17:30.365712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:16:31.984 [2024-11-19 14:17:30.365731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:31.984 [2024-11-19 14:17:30.365761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.984 [2024-11-19 14:17:30.365803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:31.984 [2024-11-19 14:17:30.365824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:16:31.984 [2024-11-19 14:17:30.365839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:31.984 [2024-11-19 14:17:30.365853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.984 [2024-11-19 14:17:30.365907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:31.984 [2024-11-19 14:17:30.365927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:31.984 [2024-11-19 14:17:30.365943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:31.984 [2024-11-19 14:17:30.365957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.984 [2024-11-19 14:17:30.366001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:31.984 [2024-11-19 14:17:30.366022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:31.984 [2024-11-19 14:17:30.366081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:31.984 [2024-11-19 14:17:30.366098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:31.984 [2024-11-19 14:17:30.366214] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 239.872 ms, result 0
00:16:32.557
00:16:32.557
00:16:32.818 14:17:31 -- ftl/trim.sh@72 -- # svcpid=72438
00:16:32.818 14:17:31 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:16:32.818 14:17:31 -- ftl/trim.sh@73 -- # waitforlisten 72438
00:16:32.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:32.818 14:17:31 -- common/autotest_common.sh@829 -- # '[' -z 72438 ']'
00:16:32.818 14:17:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:32.818 14:17:31 -- common/autotest_common.sh@834 -- # local max_retries=100
00:16:32.818 14:17:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:32.818 14:17:31 -- common/autotest_common.sh@838 -- # xtrace_disable
00:16:32.818 14:17:31 -- common/autotest_common.sh@10 -- # set +x
00:16:32.818 [2024-11-19 14:17:31.206810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:32.818 [2024-11-19 14:17:31.207084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72438 ]
00:16:32.818 [2024-11-19 14:17:31.354504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:33.077 [2024-11-19 14:17:31.491392] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:16:33.077 [2024-11-19 14:17:31.491685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:33.646 14:17:31 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:33.646 14:17:31 -- common/autotest_common.sh@862 -- # return 0
00:16:33.646 14:17:31 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:16:33.646 [2024-11-19 14:17:32.153927] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:33.646 [2024-11-19 14:17:32.153966] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:33.908 [2024-11-19 14:17:32.286993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.908 [2024-11-19 14:17:32.287025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:16:33.908 [2024-11-19 14:17:32.287037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:16:33.908 [2024-11-19 14:17:32.287043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.908 [2024-11-19 14:17:32.289071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.908 [2024-11-19 14:17:32.289194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:33.908 [2024-11-19 14:17:32.289210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.012 ms
00:16:33.908 [2024-11-19 14:17:32.289216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.908 [2024-11-19 14:17:32.289276] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:16:33.908 [2024-11-19 14:17:32.289819] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:16:33.908 [2024-11-19 14:17:32.289839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.908 [2024-11-19 14:17:32.289846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:33.908 [2024-11-19 14:17:32.289854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms
00:16:33.908 [2024-11-19 14:17:32.289860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.908 [2024-11-19 14:17:32.290808] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:16:33.908 [2024-11-19 14:17:32.301069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.908 [2024-11-19 14:17:32.301097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:16:33.908 [2024-11-19 14:17:32.301106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.265 ms
00:16:33.908 [2024-11-19 14:17:32.301113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.908 [2024-11-19 14:17:32.301175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.908 [2024-11-19 14:17:32.301185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:16:33.908 [2024-11-19 14:17:32.301192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:16:33.908 [2024-11-19 14:17:32.301199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.908 [2024-11-19 14:17:32.305461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.908 [2024-11-19 14:17:32.305488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:16:33.908 [2024-11-19 14:17:32.305497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.223 ms
00:16:33.908 [2024-11-19 14:17:32.305503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.908 [2024-11-19 14:17:32.305564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.908 [2024-11-19 14:17:32.305573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:16:33.908 [2024-11-19 14:17:32.305579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:16:33.908 [2024-11-19 14:17:32.305586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.908 [2024-11-19 14:17:32.305605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.908 [2024-11-19 14:17:32.305613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:16:33.908 [2024-11-19 14:17:32.305619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:16:33.908 [2024-11-19 14:17:32.305627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.908 [2024-11-19 14:17:32.305648] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:16:33.908 [2024-11-19 14:17:32.308382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.908 [2024-11-19 14:17:32.308500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:16:33.908 [2024-11-19 14:17:32.308515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.739 ms
00:16:33.908 [2024-11-19 14:17:32.308521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.908 [2024-11-19 14:17:32.308557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.908 [2024-11-19 14:17:32.308564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:16:33.908 [2024-11-19 14:17:32.308571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:16:33.908 [2024-11-19 14:17:32.308579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.908 [2024-11-19 14:17:32.308595] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:16:33.908 [2024-11-19 14:17:32.308608] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes
00:16:33.908 [2024-11-19 14:17:32.308635] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:16:33.908 [2024-11-19 14:17:32.308646] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes
00:16:33.908 [2024-11-19 14:17:32.308703] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes
00:16:33.909 [2024-11-19 14:17:32.308710] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:16:33.909 [2024-11-19 14:17:32.308722] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes
00:16:33.909 [2024-11-19 14:17:32.308730] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:16:33.909 [2024-11-19 14:17:32.308738] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:16:33.909 [2024-11-19 14:17:32.308744] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:16:33.909 [2024-11-19 14:17:32.308750] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:16:33.909 [2024-11-19 14:17:32.308756] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024
00:16:33.909 [2024-11-19 14:17:32.308765] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
00:16:33.909 [2024-11-19 14:17:32.308770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.909 [2024-11-19 14:17:32.308777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:16:33.909 [2024-11-19 14:17:32.308782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms
00:16:33.909 [2024-11-19 14:17:32.308789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.909 [2024-11-19 14:17:32.308839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.909 [2024-11-19 14:17:32.308847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:16:33.909 [2024-11-19 14:17:32.308853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:16:33.909 [2024-11-19 14:17:32.308859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.909 [2024-11-19 14:17:32.308932] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:16:33.909 [2024-11-19 14:17:32.308942] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:16:33.909 [2024-11-19 14:17:32.308948] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:16:33.909 [2024-11-19 14:17:32.308956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:16:33.909 [2024-11-19 14:17:32.308961] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:16:33.909 [2024-11-19 14:17:32.308968] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:16:33.909 [2024-11-19 14:17:32.308973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:16:33.909 [2024-11-19 14:17:32.308983] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:16:33.909 [2024-11-19 14:17:32.308989] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:16:33.909 [2024-11-19 14:17:32.308996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:16:33.909 [2024-11-19 14:17:32.309002] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:16:33.909 [2024-11-19 14:17:32.309009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:16:33.909 [2024-11-19 14:17:32.309015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:16:33.909 [2024-11-19 14:17:32.309021] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:16:33.909 [2024-11-19 14:17:32.309026] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB
00:16:33.909 [2024-11-19 14:17:32.309032] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:16:33.909 [2024-11-19 14:17:32.309037] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:16:33.909 [2024-11-19 14:17:32.309043] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB
00:16:33.909 [2024-11-19 14:17:32.309048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:16:33.909 [2024-11-19 14:17:32.309054] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc
00:16:33.909 [2024-11-19 14:17:32.309059] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB
00:16:33.909 [2024-11-19 14:17:32.309065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB
00:16:33.909 [2024-11-19 14:17:32.309070] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:16:33.909 [2024-11-19 14:17:32.309078] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:16:33.909 [2024-11-19 14:17:32.309082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:16:33.909 [2024-11-19 14:17:32.309093] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:16:33.909 [2024-11-19 14:17:32.309098] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB
00:16:33.909 [2024-11-19 14:17:32.309104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:16:33.909 [2024-11-19 14:17:32.309108] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:16:33.909 [2024-11-19 14:17:32.309115] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:16:33.909 [2024-11-19 14:17:32.309120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:16:33.909 [2024-11-19 14:17:32.309127] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:16:33.909 [2024-11-19 14:17:32.309132] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB
00:16:33.909 [2024-11-19 14:17:32.309137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:16:33.909 [2024-11-19 14:17:32.309142] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:16:33.909 [2024-11-19 14:17:32.309148] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:16:33.909 [2024-11-19 14:17:32.309154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:16:33.909 [2024-11-19 14:17:32.309160] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:16:33.909 [2024-11-19 14:17:32.309165] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB
00:16:33.909 [2024-11-19 14:17:32.309171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:16:33.909 [2024-11-19 14:17:32.309176] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:16:33.909 [2024-11-19 14:17:32.309184] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:16:33.909 [2024-11-19 14:17:32.309189] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:16:33.909 [2024-11-19 14:17:32.309197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:16:33.909 [2024-11-19 14:17:32.309204] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:16:33.909 [2024-11-19 14:17:32.309211] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:16:33.909 [2024-11-19 14:17:32.309216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:16:33.909 [2024-11-19 14:17:32.309222] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:16:33.909 [2024-11-19 14:17:32.309226] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:16:33.909 [2024-11-19 14:17:32.309233] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:16:33.909 [2024-11-19 14:17:32.309239] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:16:33.909 [2024-11-19 14:17:32.309248] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:16:33.909 [2024-11-19 14:17:32.309254] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:16:33.909 [2024-11-19 14:17:32.309260] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80
00:16:33.909 [2024-11-19 14:17:32.309266] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80
00:16:33.909 [2024-11-19 14:17:32.309274] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400
00:16:33.909 [2024-11-19 14:17:32.309280] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400
00:16:33.909 [2024-11-19 14:17:32.309287] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400
00:16:33.909 [2024-11-19 14:17:32.309292] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400
00:16:33.909 [2024-11-19 14:17:32.309299] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40
00:16:33.909 [2024-11-19 14:17:32.309304] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40
00:16:33.909 [2024-11-19 14:17:32.309310] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20
00:16:33.909 [2024-11-19 14:17:32.309316] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20
00:16:33.909 [2024-11-19 14:17:32.309323] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000
00:16:33.909 [2024-11-19 14:17:32.309329] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720
00:16:33.909 [2024-11-19 14:17:32.309335] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:16:33.909 [2024-11-19 14:17:32.309341] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:16:33.909 [2024-11-19 14:17:32.309348] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:16:33.909 [2024-11-19 14:17:32.309353] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:16:33.909 [2024-11-19 14:17:32.309361] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:16:33.909 [2024-11-19 14:17:32.309366] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:16:33.909 [2024-11-19 14:17:32.309375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.909 [2024-11-19 14:17:32.309381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:16:33.909 [2024-11-19 14:17:32.309389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms
00:16:33.909 [2024-11-19 14:17:32.309395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.909 [2024-11-19 14:17:32.321232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.909 [2024-11-19 14:17:32.321256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:16:33.909 [2024-11-19 14:17:32.321267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.797 ms
00:16:33.909 [2024-11-19 14:17:32.321275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.909 [2024-11-19 14:17:32.321362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.909 [2024-11-19 14:17:32.321369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:16:33.909 [2024-11-19 14:17:32.321376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms
00:16:33.909 [2024-11-19 14:17:32.321382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.910 [2024-11-19 14:17:32.345292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.910 [2024-11-19 14:17:32.345317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:16:33.910 [2024-11-19 14:17:32.345327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.894 ms
00:16:33.910 [2024-11-19 14:17:32.345333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.910 [2024-11-19 14:17:32.345376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.910 [2024-11-19 14:17:32.345385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:16:33.910 [2024-11-19 14:17:32.345393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:16:33.910 [2024-11-19 14:17:32.345400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.910 [2024-11-19 14:17:32.345674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.910 [2024-11-19 14:17:32.345685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:16:33.910 [2024-11-19 14:17:32.345696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms
00:16:33.910 [2024-11-19 14:17:32.345702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.910 [2024-11-19 14:17:32.345790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.910 [2024-11-19 14:17:32.345797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:16:33.910 [2024-11-19 14:17:32.345806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms
00:16:33.910 [2024-11-19 14:17:32.345812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
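
The layout numbers in this dump are internally consistent: 23592960 L2P entries at an address size of 4 bytes is exactly 90 MiB, matching the 90.00 MiB l2p region in the NV cache layout, and the 0x5a00-block region type 0x2 in the superblock dump works out to the same figure if the usual 4 KiB FTL block size is assumed. A quick shell check:

  echo $(( 23592960 * 4 / 1024 / 1024 ))    # L2P table size in MiB -> 90
  echo $(( 0x5a00 * 4096 / 1024 / 1024 ))   # region type 0x2, assuming 4 KiB blocks -> 90
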
00:16:33.910 [2024-11-19 14:17:32.357559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.910 [2024-11-19 14:17:32.357582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:16:33.910 [2024-11-19 14:17:32.357592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.730 ms
00:16:33.910 [2024-11-19 14:17:32.357597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.910 [2024-11-19 14:17:32.367388] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:16:33.910 [2024-11-19 14:17:32.367491] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:16:33.910 [2024-11-19 14:17:32.367505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.910 [2024-11-19 14:17:32.367511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:16:33.910 [2024-11-19 14:17:32.367519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.832 ms
00:16:33.910 [2024-11-19 14:17:32.367524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.910 [2024-11-19 14:17:32.386246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.910 [2024-11-19 14:17:32.386341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:16:33.910 [2024-11-19 14:17:32.386357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.680 ms
00:16:33.910 [2024-11-19 14:17:32.386363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.910 [2024-11-19 14:17:32.395740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.910 [2024-11-19 14:17:32.395767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:16:33.910 [2024-11-19 14:17:32.395776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.327 ms
00:16:33.910 [2024-11-19 14:17:32.395781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.910 [2024-11-19 14:17:32.404938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.910 [2024-11-19 14:17:32.405025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:16:33.910 [2024-11-19 14:17:32.405039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.115 ms
00:16:33.910 [2024-11-19 14:17:32.405045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.910 [2024-11-19 14:17:32.405308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.910 [2024-11-19 14:17:32.405318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:16:33.910 [2024-11-19 14:17:32.405327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms
00:16:33.910 [2024-11-19 14:17:32.405332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.910 [2024-11-19 14:17:32.451285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:33.910 [2024-11-19 14:17:32.451312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:16:33.910 [2024-11-19 14:17:32.451324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.935 ms
00:16:33.910 [2024-11-19 14:17:32.451331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:33.910 [2024-11-19 14:17:32.459272] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:16:34.171 [2024-11-19 14:17:32.470471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:34.171 [2024-11-19 14:17:32.470500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:16:34.171 [2024-11-19 14:17:32.470509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.086 ms
00:16:34.171 [2024-11-19 14:17:32.470516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:34.171 [2024-11-19 14:17:32.470560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:34.171 [2024-11-19 14:17:32.470571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:16:34.171 [2024-11-19 14:17:32.470578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:16:34.171 [2024-11-19 14:17:32.470590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:34.171 [2024-11-19 14:17:32.470624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:34.171 [2024-11-19 14:17:32.470632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:16:34.171 [2024-11-19 14:17:32.470639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms
00:16:34.171 [2024-11-19 14:17:32.470645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:34.171 [2024-11-19 14:17:32.471585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:34.171 [2024-11-19 14:17:32.471611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:16:34.171 [2024-11-19 14:17:32.471618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms
00:16:34.171 [2024-11-19 14:17:32.471624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:34.171 [2024-11-19 14:17:32.471648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:34.171 [2024-11-19 14:17:32.471656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:16:34.171 [2024-11-19 14:17:32.471662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:16:34.171 [2024-11-19 14:17:32.471670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:34.171 [2024-11-19 14:17:32.471696] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:16:34.171 [2024-11-19 14:17:32.471705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:34.171 [2024-11-19 14:17:32.471710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:16:34.171 [2024-11-19 14:17:32.471717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:16:34.171 [2024-11-19 14:17:32.471723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:34.171 [2024-11-19 14:17:32.490229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:34.171 [2024-11-19 14:17:32.490253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:16:34.171 [2024-11-19 14:17:32.490263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.486 ms
00:16:34.171 [2024-11-19 14:17:32.490269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:34.171 [2024-11-19 14:17:32.490335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:34.171 [2024-11-19 14:17:32.490343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:16:34.171 [2024-11-19 14:17:32.490351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms
00:16:34.171 [2024-11-19 14:17:32.490359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:34.171 [2024-11-19 14:17:32.490945] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:16:34.171 [2024-11-19 14:17:32.493344] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 203.730 ms, result 0
00:16:34.171 [2024-11-19 14:17:32.494783] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:34.171 Some configs were skipped because the RPC state that can call them passed over.
00:16:34.171 14:17:32 -- ftl/trim.sh@78 -- # svcpid=72438
00:16:34.171 14:17:32 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:16:34.171 [2024-11-19 14:17:32.716776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:34.171 [2024-11-19 14:17:32.716812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap
00:16:34.171 [2024-11-19 14:17:32.716822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.205 ms
00:16:34.171 [2024-11-19 14:17:32.716829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:34.172 [2024-11-19 14:17:32.716857] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 19.286 ms, result 0
00:16:34.172 true
00:16:34.433 14:17:32 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:16:34.433 [2024-11-19 14:17:32.923620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:34.433 [2024-11-19 14:17:32.923730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap
00:16:34.433 [2024-11-19 14:17:32.923776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.581 ms
00:16:34.433 [2024-11-19 14:17:32.923794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:34.433 [2024-11-19 14:17:32.923835] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.794 ms, result 0
00:16:34.433 true
00:16:34.433 14:17:32 -- ftl/trim.sh@81 -- # killprocess 72438
00:16:34.433 14:17:32 -- common/autotest_common.sh@936 -- # '[' -z 72438 ']'
00:16:34.433 14:17:32 -- common/autotest_common.sh@940 -- # kill -0 72438
00:16:34.433 14:17:32 -- common/autotest_common.sh@941 -- # uname
00:16:34.433 14:17:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:34.433 14:17:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72438
00:16:34.433 14:17:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:34.433 14:17:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:34.433 killing process with pid 72438
00:16:34.433 14:17:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72438'
00:16:34.433 14:17:32 -- common/autotest_common.sh@955 -- # kill 72438
00:16:34.433 14:17:32 -- common/autotest_common.sh@960 -- # wait 72438
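
The two bdev_ftl_unmap calls above bracket the device's address space: the first trims 1024 blocks at LBA 0, and the second starts at LBA 23591936, which is 23592960 - 1024, i.e. the last 1024-block stripe of the 23592960-entry L2P reported during startup. The same pair of RPCs, issued by hand against a running target (assuming the socket from the startup above is still listening):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  "$rpc" bdev_ftl_unmap -b ftl0 --lba $(( 23592960 - 1024 )) --num_blocks 1024   # = 23591936
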
00:16:35.006 [2024-11-19 14:17:33.506148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:35.006 [2024-11-19 14:17:33.506338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:16:35.006 [2024-11-19 14:17:33.506390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:16:35.006 [2024-11-19 14:17:33.506410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:35.006 [2024-11-19 14:17:33.506446] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:16:35.006 [2024-11-19 14:17:33.508464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:35.006 [2024-11-19 14:17:33.508553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:16:35.006 [2024-11-19 14:17:33.508603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.986 ms
00:16:35.006 [2024-11-19 14:17:33.508612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:35.006 [2024-11-19 14:17:33.508838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:35.006 [2024-11-19 14:17:33.508847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:16:35.006 [2024-11-19 14:17:33.508855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms
00:16:35.006 [2024-11-19 14:17:33.508860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:35.006 [2024-11-19 14:17:33.512478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:35.006 [2024-11-19 14:17:33.512570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:16:35.006 [2024-11-19 14:17:33.512586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.602 ms
00:16:35.006 [2024-11-19 14:17:33.512591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:35.006 [2024-11-19 14:17:33.517958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:35.006 [2024-11-19 14:17:33.517987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:16:35.006 [2024-11-19 14:17:33.517996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.337 ms
00:16:35.006 [2024-11-19 14:17:33.518002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:35.006 [2024-11-19 14:17:33.526375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:35.006 [2024-11-19 14:17:33.526398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:16:35.006 [2024-11-19 14:17:33.526409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.327 ms
00:16:35.006 [2024-11-19 14:17:33.526414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:35.006 [2024-11-19 14:17:33.533466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:35.006 [2024-11-19 14:17:33.533492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:16:35.006 [2024-11-19 14:17:33.533501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.021 ms
00:16:35.006 [2024-11-19 14:17:33.533508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:35.006 [2024-11-19 14:17:33.533615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:35.006 [2024-11-19 14:17:33.533622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:16:35.006 [2024-11-19 14:17:33.533629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms
00:16:35.006 [2024-11-19 14:17:33.533635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:35.006 [2024-11-19 14:17:33.542243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:35.006 [2024-11-19 14:17:33.542267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:16:35.006 [2024-11-19 14:17:33.542275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.590 ms
00:16:35.006 [2024-11-19 14:17:33.542280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:35.006 [2024-11-19 14:17:33.550386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:35.006 [2024-11-19 14:17:33.550408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:16:35.006 [2024-11-19 14:17:33.550420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.075 ms
00:16:35.006 [2024-11-19 14:17:33.550425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:35.006 [2024-11-19 14:17:33.557978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:35.006 [2024-11-19 14:17:33.558000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:16:35.006 [2024-11-19 14:17:33.558008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.523 ms
00:16:35.006 [2024-11-19 14:17:33.558013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:35.006 [2024-11-19 14:17:33.565690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:35.006 [2024-11-19 14:17:33.565713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:16:35.006 [2024-11-19 14:17:33.565721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.618 ms
00:16:35.006 [2024-11-19 14:17:33.565726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:35.006 [2024-11-19 14:17:33.565753] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:16:35.006 [2024-11-19 14:17:33.565764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Bands 2 through 35 report identically: 0 / 261120 wr_cnt: 0 state: free ...]
00:16:35.269 [2024-11-19 14:17:33.566013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:16:35.269 [2024-11-19 14:17:33.566018] ftl_debug.c:
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 
14:17:33.566181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:16:35.269 [2024-11-19 14:17:33.566347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:35.269 [2024-11-19 14:17:33.566431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:35.270 [2024-11-19 14:17:33.566438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:35.270 [2024-11-19 14:17:33.566449] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:35.270 [2024-11-19 14:17:33.566457] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3df0115d-4eed-4c52-9819-8d435bdfff0b 00:16:35.270 [2024-11-19 14:17:33.566465] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:35.270 [2024-11-19 14:17:33.566472] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:35.270 [2024-11-19 14:17:33.566477] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:35.270 [2024-11-19 14:17:33.566484] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:35.270 [2024-11-19 14:17:33.566489] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:35.270 [2024-11-19 14:17:33.566497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:35.270 [2024-11-19 14:17:33.566502] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:35.270 [2024-11-19 14:17:33.566508] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:35.270 [2024-11-19 14:17:33.566513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:35.270 [2024-11-19 14:17:33.566520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.270 [2024-11-19 14:17:33.566525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:35.270 [2024-11-19 14:17:33.566533] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:16:35.270 [2024-11-19 14:17:33.566540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.270 [2024-11-19 14:17:33.576338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.270 [2024-11-19 14:17:33.576360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:35.270 [2024-11-19 14:17:33.576370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.781 ms 00:16:35.270 [2024-11-19 14:17:33.576375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.270 [2024-11-19 14:17:33.576537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.270 [2024-11-19 14:17:33.576545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:35.270 [2024-11-19 14:17:33.576554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:16:35.270 [2024-11-19 14:17:33.576559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.270 [2024-11-19 14:17:33.611678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.270 [2024-11-19 14:17:33.611778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:35.270 [2024-11-19 14:17:33.611792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.270 [2024-11-19 14:17:33.611798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.270 [2024-11-19 14:17:33.611857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.270 [2024-11-19 14:17:33.611865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:35.270 [2024-11-19 14:17:33.611873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.270 [2024-11-19 14:17:33.611898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.270 [2024-11-19 14:17:33.611932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.270 [2024-11-19 14:17:33.611940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:35.270 [2024-11-19 14:17:33.611948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.270 [2024-11-19 14:17:33.611954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.270 [2024-11-19 14:17:33.611969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.270 [2024-11-19 14:17:33.611975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:35.270 [2024-11-19 14:17:33.611985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.270 [2024-11-19 14:17:33.611992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.270 [2024-11-19 14:17:33.671965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.270 [2024-11-19 14:17:33.671996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:35.270 [2024-11-19 14:17:33.672007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.270 [2024-11-19 14:17:33.672013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.270 [2024-11-19 14:17:33.694754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.270 [2024-11-19 14:17:33.694907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:16:35.270 [2024-11-19 14:17:33.694922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.270 [2024-11-19 14:17:33.694930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.270 [2024-11-19 14:17:33.694973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.270 [2024-11-19 14:17:33.694981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:35.270 [2024-11-19 14:17:33.694989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.270 [2024-11-19 14:17:33.694995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.270 [2024-11-19 14:17:33.695021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.270 [2024-11-19 14:17:33.695027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:35.270 [2024-11-19 14:17:33.695034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.270 [2024-11-19 14:17:33.695039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.270 [2024-11-19 14:17:33.695112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.270 [2024-11-19 14:17:33.695120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:35.270 [2024-11-19 14:17:33.695128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.270 [2024-11-19 14:17:33.695133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.270 [2024-11-19 14:17:33.695162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.270 [2024-11-19 14:17:33.695169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:35.270 [2024-11-19 14:17:33.695176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.270 [2024-11-19 14:17:33.695181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.270 [2024-11-19 14:17:33.695211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.270 [2024-11-19 14:17:33.695218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:35.270 [2024-11-19 14:17:33.695226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.270 [2024-11-19 14:17:33.695232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.270 [2024-11-19 14:17:33.695276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.270 [2024-11-19 14:17:33.695283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:35.270 [2024-11-19 14:17:33.695291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.270 [2024-11-19 14:17:33.695296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.270 [2024-11-19 14:17:33.695400] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 189.238 ms, result 0 00:16:35.843 14:17:34 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:35.843 14:17:34 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:35.843 [2024-11-19 14:17:34.384052] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 
23.11.0 initialization... 00:16:35.843 [2024-11-19 14:17:34.384163] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72484 ] 00:16:36.104 [2024-11-19 14:17:34.533028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.365 [2024-11-19 14:17:34.672374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.365 [2024-11-19 14:17:34.876659] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:36.365 [2024-11-19 14:17:34.876706] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:36.627 [2024-11-19 14:17:35.017453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.628 [2024-11-19 14:17:35.017486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:36.628 [2024-11-19 14:17:35.017496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:36.628 [2024-11-19 14:17:35.017502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.628 [2024-11-19 14:17:35.019541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.628 [2024-11-19 14:17:35.019733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:36.628 [2024-11-19 14:17:35.019747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.027 ms 00:16:36.628 [2024-11-19 14:17:35.019753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.628 [2024-11-19 14:17:35.019805] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:36.628 [2024-11-19 14:17:35.020373] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:36.628 [2024-11-19 14:17:35.020390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.628 [2024-11-19 14:17:35.020396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:36.628 [2024-11-19 14:17:35.020404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:16:36.628 [2024-11-19 14:17:35.020410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.628 [2024-11-19 14:17:35.021350] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:36.628 [2024-11-19 14:17:35.030992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.628 [2024-11-19 14:17:35.031018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:36.628 [2024-11-19 14:17:35.031026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.643 ms 00:16:36.628 [2024-11-19 14:17:35.031032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.628 [2024-11-19 14:17:35.031097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.628 [2024-11-19 14:17:35.031106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:36.628 [2024-11-19 14:17:35.031112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:36.628 [2024-11-19 14:17:35.031117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.628 [2024-11-19 14:17:35.035375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:36.628 [2024-11-19 14:17:35.035398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:36.628 [2024-11-19 14:17:35.035405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.226 ms 00:16:36.628 [2024-11-19 14:17:35.035414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.628 [2024-11-19 14:17:35.035493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.628 [2024-11-19 14:17:35.035503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:36.628 [2024-11-19 14:17:35.035509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:36.628 [2024-11-19 14:17:35.035515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.628 [2024-11-19 14:17:35.035534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.628 [2024-11-19 14:17:35.035540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:36.628 [2024-11-19 14:17:35.035546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:36.628 [2024-11-19 14:17:35.035552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.628 [2024-11-19 14:17:35.035574] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:36.628 [2024-11-19 14:17:35.038314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.628 [2024-11-19 14:17:35.038436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:36.628 [2024-11-19 14:17:35.038449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.750 ms 00:16:36.628 [2024-11-19 14:17:35.038458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.628 [2024-11-19 14:17:35.038489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.628 [2024-11-19 14:17:35.038495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:36.628 [2024-11-19 14:17:35.038501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:36.628 [2024-11-19 14:17:35.038507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.628 [2024-11-19 14:17:35.038520] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:36.628 [2024-11-19 14:17:35.038534] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:36.628 [2024-11-19 14:17:35.038558] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:36.628 [2024-11-19 14:17:35.038571] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:36.628 [2024-11-19 14:17:35.038628] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:36.628 [2024-11-19 14:17:35.038636] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:36.628 [2024-11-19 14:17:35.038644] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:36.628 [2024-11-19 14:17:35.038651] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:36.628 [2024-11-19 14:17:35.038658] ftl_layout.c: 
678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:36.628 [2024-11-19 14:17:35.038664] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:36.628 [2024-11-19 14:17:35.038669] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:36.628 [2024-11-19 14:17:35.038675] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:36.628 [2024-11-19 14:17:35.038682] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:36.628 [2024-11-19 14:17:35.038689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.628 [2024-11-19 14:17:35.038694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:36.628 [2024-11-19 14:17:35.038700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:16:36.628 [2024-11-19 14:17:35.038705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.628 [2024-11-19 14:17:35.038754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.628 [2024-11-19 14:17:35.038761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:36.628 [2024-11-19 14:17:35.038767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:36.628 [2024-11-19 14:17:35.038772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.628 [2024-11-19 14:17:35.038829] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:36.628 [2024-11-19 14:17:35.038837] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:36.628 [2024-11-19 14:17:35.038843] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:36.628 [2024-11-19 14:17:35.038850] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.628 [2024-11-19 14:17:35.038855] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:36.628 [2024-11-19 14:17:35.038861] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:36.628 [2024-11-19 14:17:35.038866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:36.628 [2024-11-19 14:17:35.038871] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:36.628 [2024-11-19 14:17:35.038892] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:36.628 [2024-11-19 14:17:35.038900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:36.628 [2024-11-19 14:17:35.038906] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:36.628 [2024-11-19 14:17:35.038912] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:36.628 [2024-11-19 14:17:35.038917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:36.628 [2024-11-19 14:17:35.038922] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:36.628 [2024-11-19 14:17:35.038932] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:36.628 [2024-11-19 14:17:35.038938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.628 [2024-11-19 14:17:35.038943] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:36.628 [2024-11-19 14:17:35.038948] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:36.628 [2024-11-19 14:17:35.038954] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.628 [2024-11-19 14:17:35.038959] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:36.628 [2024-11-19 14:17:35.038964] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:36.628 [2024-11-19 14:17:35.038969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:36.628 [2024-11-19 14:17:35.038974] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:36.628 [2024-11-19 14:17:35.038979] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:36.628 [2024-11-19 14:17:35.038984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:36.628 [2024-11-19 14:17:35.038989] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:36.628 [2024-11-19 14:17:35.038994] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:36.628 [2024-11-19 14:17:35.038998] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:36.628 [2024-11-19 14:17:35.039004] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:36.628 [2024-11-19 14:17:35.039009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:36.628 [2024-11-19 14:17:35.039014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:36.628 [2024-11-19 14:17:35.039019] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:36.628 [2024-11-19 14:17:35.039023] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:36.628 [2024-11-19 14:17:35.039028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:36.628 [2024-11-19 14:17:35.039033] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:36.628 [2024-11-19 14:17:35.039038] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:36.628 [2024-11-19 14:17:35.039043] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:36.628 [2024-11-19 14:17:35.039048] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:36.629 [2024-11-19 14:17:35.039053] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:36.629 [2024-11-19 14:17:35.039058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:36.629 [2024-11-19 14:17:35.039062] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:36.629 [2024-11-19 14:17:35.039068] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:36.629 [2024-11-19 14:17:35.039074] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:36.629 [2024-11-19 14:17:35.039081] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.629 [2024-11-19 14:17:35.039087] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:36.629 [2024-11-19 14:17:35.039093] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:36.629 [2024-11-19 14:17:35.039097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:36.629 [2024-11-19 14:17:35.039103] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:36.629 [2024-11-19 14:17:35.039107] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:36.629 [2024-11-19 14:17:35.039112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:36.629 
[2024-11-19 14:17:35.039118] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:36.629 [2024-11-19 14:17:35.039125] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:36.629 [2024-11-19 14:17:35.039131] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:36.629 [2024-11-19 14:17:35.039137] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:36.629 [2024-11-19 14:17:35.039142] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:36.629 [2024-11-19 14:17:35.039147] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:36.629 [2024-11-19 14:17:35.039153] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:36.629 [2024-11-19 14:17:35.039159] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:36.629 [2024-11-19 14:17:35.039164] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:36.629 [2024-11-19 14:17:35.039170] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:36.629 [2024-11-19 14:17:35.039176] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:36.629 [2024-11-19 14:17:35.039181] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:36.629 [2024-11-19 14:17:35.039186] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:36.629 [2024-11-19 14:17:35.039191] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:36.629 [2024-11-19 14:17:35.039197] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:36.629 [2024-11-19 14:17:35.039203] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:36.629 [2024-11-19 14:17:35.039212] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:36.629 [2024-11-19 14:17:35.039218] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:36.629 [2024-11-19 14:17:35.039223] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:36.629 [2024-11-19 14:17:35.039229] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:36.629 [2024-11-19 14:17:35.039242] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:36.629 [2024-11-19 14:17:35.039248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.629 [2024-11-19 14:17:35.039255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:36.629 [2024-11-19 14:17:35.039262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:16:36.629 [2024-11-19 14:17:35.039267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.629 [2024-11-19 14:17:35.051089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.629 [2024-11-19 14:17:35.051114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:36.629 [2024-11-19 14:17:35.051122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.789 ms 00:16:36.629 [2024-11-19 14:17:35.051127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.629 [2024-11-19 14:17:35.051214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.629 [2024-11-19 14:17:35.051221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:36.629 [2024-11-19 14:17:35.051227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:36.629 [2024-11-19 14:17:35.051240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.629 [2024-11-19 14:17:35.093141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.629 [2024-11-19 14:17:35.093171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:36.629 [2024-11-19 14:17:35.093182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.884 ms 00:16:36.629 [2024-11-19 14:17:35.093188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.629 [2024-11-19 14:17:35.093244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.629 [2024-11-19 14:17:35.093252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:36.629 [2024-11-19 14:17:35.093262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:36.629 [2024-11-19 14:17:35.093268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.629 [2024-11-19 14:17:35.093539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.629 [2024-11-19 14:17:35.093552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:36.629 [2024-11-19 14:17:35.093560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:16:36.629 [2024-11-19 14:17:35.093566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.629 [2024-11-19 14:17:35.093658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.629 [2024-11-19 14:17:35.093672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:36.629 [2024-11-19 14:17:35.093679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:36.629 [2024-11-19 14:17:35.093685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.629 [2024-11-19 14:17:35.105029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.629 [2024-11-19 14:17:35.105053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:36.629 [2024-11-19 14:17:35.105060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
11.325 ms 00:16:36.629 [2024-11-19 14:17:35.105068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.629 [2024-11-19 14:17:35.114903] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:36.629 [2024-11-19 14:17:35.114928] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:36.629 [2024-11-19 14:17:35.114937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.629 [2024-11-19 14:17:35.114943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:36.629 [2024-11-19 14:17:35.114949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.796 ms 00:16:36.629 [2024-11-19 14:17:35.114955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.629 [2024-11-19 14:17:35.133786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.629 [2024-11-19 14:17:35.133814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:36.629 [2024-11-19 14:17:35.133823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.787 ms 00:16:36.629 [2024-11-19 14:17:35.133830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.629 [2024-11-19 14:17:35.142915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.629 [2024-11-19 14:17:35.142938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:36.629 [2024-11-19 14:17:35.142950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.014 ms 00:16:36.629 [2024-11-19 14:17:35.142955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.629 [2024-11-19 14:17:35.152042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.629 [2024-11-19 14:17:35.152152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:36.629 [2024-11-19 14:17:35.152164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.049 ms 00:16:36.629 [2024-11-19 14:17:35.152169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.629 [2024-11-19 14:17:35.152433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.629 [2024-11-19 14:17:35.152442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:36.629 [2024-11-19 14:17:35.152448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:16:36.629 [2024-11-19 14:17:35.152457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.891 [2024-11-19 14:17:35.198336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.891 [2024-11-19 14:17:35.198365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:36.891 [2024-11-19 14:17:35.198374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.862 ms 00:16:36.891 [2024-11-19 14:17:35.198383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.891 [2024-11-19 14:17:35.206184] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:36.891 [2024-11-19 14:17:35.217413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.891 [2024-11-19 14:17:35.217439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:36.891 [2024-11-19 
14:17:35.217449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.972 ms 00:16:36.891 [2024-11-19 14:17:35.217455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.891 [2024-11-19 14:17:35.217503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.891 [2024-11-19 14:17:35.217510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:36.891 [2024-11-19 14:17:35.217519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:36.891 [2024-11-19 14:17:35.217525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.891 [2024-11-19 14:17:35.217561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.891 [2024-11-19 14:17:35.217567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:36.891 [2024-11-19 14:17:35.217573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:36.891 [2024-11-19 14:17:35.217578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.891 [2024-11-19 14:17:35.218509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.891 [2024-11-19 14:17:35.218535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:36.891 [2024-11-19 14:17:35.218542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.913 ms 00:16:36.891 [2024-11-19 14:17:35.218547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.891 [2024-11-19 14:17:35.218571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.891 [2024-11-19 14:17:35.218580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:36.891 [2024-11-19 14:17:35.218585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:36.891 [2024-11-19 14:17:35.218591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.891 [2024-11-19 14:17:35.218616] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:36.891 [2024-11-19 14:17:35.218623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.891 [2024-11-19 14:17:35.218629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:36.891 [2024-11-19 14:17:35.218635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:36.891 [2024-11-19 14:17:35.218640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.891 [2024-11-19 14:17:35.236704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.891 [2024-11-19 14:17:35.236795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:36.891 [2024-11-19 14:17:35.236808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.048 ms 00:16:36.891 [2024-11-19 14:17:35.236813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.891 [2024-11-19 14:17:35.236887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.891 [2024-11-19 14:17:35.236896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:36.891 [2024-11-19 14:17:35.236902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:36.891 [2024-11-19 14:17:35.236908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.891 [2024-11-19 14:17:35.237505] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:36.891 [2024-11-19 14:17:35.239900] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 219.841 ms, result 0 00:16:36.891 [2024-11-19 14:17:35.240730] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:36.891 [2024-11-19 14:17:35.255716] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:37.836  [2024-11-19T14:17:37.344Z] Copying: 15/256 [MB] (15 MBps) [2024-11-19T14:17:38.284Z] Copying: 28/256 [MB] (13 MBps) [2024-11-19T14:17:39.673Z] Copying: 41/256 [MB] (12 MBps) [2024-11-19T14:17:40.616Z] Copying: 57/256 [MB] (16 MBps) [2024-11-19T14:17:41.563Z] Copying: 70/256 [MB] (12 MBps) [2024-11-19T14:17:42.575Z] Copying: 80/256 [MB] (10 MBps) [2024-11-19T14:17:43.520Z] Copying: 95/256 [MB] (14 MBps) [2024-11-19T14:17:44.467Z] Copying: 108/256 [MB] (12 MBps) [2024-11-19T14:17:45.409Z] Copying: 127/256 [MB] (19 MBps) [2024-11-19T14:17:46.356Z] Copying: 140/256 [MB] (13 MBps) [2024-11-19T14:17:47.301Z] Copying: 154/256 [MB] (13 MBps) [2024-11-19T14:17:48.688Z] Copying: 170/256 [MB] (15 MBps) [2024-11-19T14:17:49.628Z] Copying: 188/256 [MB] (17 MBps) [2024-11-19T14:17:50.570Z] Copying: 206/256 [MB] (18 MBps) [2024-11-19T14:17:51.514Z] Copying: 222/256 [MB] (15 MBps) [2024-11-19T14:17:52.460Z] Copying: 237/256 [MB] (14 MBps) [2024-11-19T14:17:52.460Z] Copying: 256/256 [MB] (average 15 MBps)[2024-11-19 14:17:52.139830] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:53.898 [2024-11-19 14:17:52.150312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.898 [2024-11-19 14:17:52.150504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:53.898 [2024-11-19 14:17:52.150526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:53.898 [2024-11-19 14:17:52.150535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.898 [2024-11-19 14:17:52.150567] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:53.898 [2024-11-19 14:17:52.153522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.898 [2024-11-19 14:17:52.153708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:53.898 [2024-11-19 14:17:52.153730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.940 ms 00:16:53.898 [2024-11-19 14:17:52.153738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.898 [2024-11-19 14:17:52.154036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.898 [2024-11-19 14:17:52.154049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:53.898 [2024-11-19 14:17:52.154060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:16:53.898 [2024-11-19 14:17:52.154074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.898 [2024-11-19 14:17:52.157798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.898 [2024-11-19 14:17:52.157827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:53.898 [2024-11-19 14:17:52.157837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.705 ms 
00:16:53.898 [2024-11-19 14:17:52.157845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.898 [2024-11-19 14:17:52.164741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.898 [2024-11-19 14:17:52.164935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:53.898 [2024-11-19 14:17:52.164956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.852 ms 00:16:53.898 [2024-11-19 14:17:52.164965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.898 [2024-11-19 14:17:52.190810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.898 [2024-11-19 14:17:52.190861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:53.898 [2024-11-19 14:17:52.190895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.764 ms 00:16:53.898 [2024-11-19 14:17:52.190904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.898 [2024-11-19 14:17:52.208240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.898 [2024-11-19 14:17:52.208432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:53.898 [2024-11-19 14:17:52.208455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.272 ms 00:16:53.898 [2024-11-19 14:17:52.208463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.898 [2024-11-19 14:17:52.208681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.898 [2024-11-19 14:17:52.208695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:53.898 [2024-11-19 14:17:52.208704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:16:53.898 [2024-11-19 14:17:52.208713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.898 [2024-11-19 14:17:52.234998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.898 [2024-11-19 14:17:52.235167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:53.898 [2024-11-19 14:17:52.235187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.266 ms 00:16:53.898 [2024-11-19 14:17:52.235194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.898 [2024-11-19 14:17:52.261145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.898 [2024-11-19 14:17:52.261191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:53.898 [2024-11-19 14:17:52.261202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.823 ms 00:16:53.898 [2024-11-19 14:17:52.261210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.898 [2024-11-19 14:17:52.286461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.898 [2024-11-19 14:17:52.286627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:53.898 [2024-11-19 14:17:52.286644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.184 ms 00:16:53.898 [2024-11-19 14:17:52.286652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.898 [2024-11-19 14:17:52.311675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.898 [2024-11-19 14:17:52.311722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:53.898 [2024-11-19 14:17:52.311734] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.924 ms 00:16:53.898 [2024-11-19 14:17:52.311741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.898 [2024-11-19 14:17:52.311803] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:53.898 [2024-11-19 14:17:52.311820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:53.898 [2024-11-19 14:17:52.311831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:53.898 [2024-11-19 14:17:52.311840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:53.898 [2024-11-19 14:17:52.311848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:53.898 [2024-11-19 14:17:52.311856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:53.898 [2024-11-19 14:17:52.311865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:53.898 [2024-11-19 14:17:52.311873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:53.898 [2024-11-19 14:17:52.311903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.311911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.311920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.311928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.311936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.311944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.311953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.311961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.311969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.311978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.311985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.311993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 
261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312425] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:53.899 [2024-11-19 14:17:52.312625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:53.900 [2024-11-19 
14:17:52.312632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:53.900 [2024-11-19 14:17:52.312640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:53.900 [2024-11-19 14:17:52.312648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:53.900 [2024-11-19 14:17:52.312664] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:53.900 [2024-11-19 14:17:52.312673] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3df0115d-4eed-4c52-9819-8d435bdfff0b 00:16:53.900 [2024-11-19 14:17:52.312682] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:53.900 [2024-11-19 14:17:52.312690] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:53.900 [2024-11-19 14:17:52.312697] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:53.900 [2024-11-19 14:17:52.312706] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:53.900 [2024-11-19 14:17:52.312713] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:53.900 [2024-11-19 14:17:52.312724] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:53.900 [2024-11-19 14:17:52.312731] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:53.900 [2024-11-19 14:17:52.312739] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:53.900 [2024-11-19 14:17:52.312747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:53.900 [2024-11-19 14:17:52.312755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.900 [2024-11-19 14:17:52.312763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:53.900 [2024-11-19 14:17:52.312772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:16:53.900 [2024-11-19 14:17:52.312779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.900 [2024-11-19 14:17:52.326384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.900 [2024-11-19 14:17:52.326423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:53.900 [2024-11-19 14:17:52.326441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.573 ms 00:16:53.900 [2024-11-19 14:17:52.326450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.900 [2024-11-19 14:17:52.326690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.900 [2024-11-19 14:17:52.326701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:53.900 [2024-11-19 14:17:52.326710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:16:53.900 [2024-11-19 14:17:52.326718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.900 [2024-11-19 14:17:52.368228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.900 [2024-11-19 14:17:52.368285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:53.900 [2024-11-19 14:17:52.368302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.900 [2024-11-19 14:17:52.368310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.900 [2024-11-19 14:17:52.368406] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:16:53.900 [2024-11-19 14:17:52.368416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:53.900 [2024-11-19 14:17:52.368424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.900 [2024-11-19 14:17:52.368432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.900 [2024-11-19 14:17:52.368488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.900 [2024-11-19 14:17:52.368499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:53.900 [2024-11-19 14:17:52.368507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.900 [2024-11-19 14:17:52.368519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.900 [2024-11-19 14:17:52.368538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.900 [2024-11-19 14:17:52.368546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:53.900 [2024-11-19 14:17:52.368554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.900 [2024-11-19 14:17:52.368564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.900 [2024-11-19 14:17:52.449670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.900 [2024-11-19 14:17:52.449727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:53.900 [2024-11-19 14:17:52.449746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.900 [2024-11-19 14:17:52.449754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.162 [2024-11-19 14:17:52.482348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.162 [2024-11-19 14:17:52.482394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:54.162 [2024-11-19 14:17:52.482407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.162 [2024-11-19 14:17:52.482415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.162 [2024-11-19 14:17:52.482476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.162 [2024-11-19 14:17:52.482486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:54.162 [2024-11-19 14:17:52.482495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.162 [2024-11-19 14:17:52.482503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.162 [2024-11-19 14:17:52.482540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.162 [2024-11-19 14:17:52.482549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:54.162 [2024-11-19 14:17:52.482557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.162 [2024-11-19 14:17:52.482566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.162 [2024-11-19 14:17:52.482670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.162 [2024-11-19 14:17:52.482683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:54.162 [2024-11-19 14:17:52.482691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.162 [2024-11-19 14:17:52.482699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:16:54.162 [2024-11-19 14:17:52.482736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.162 [2024-11-19 14:17:52.482745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:54.162 [2024-11-19 14:17:52.482754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.162 [2024-11-19 14:17:52.482763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.162 [2024-11-19 14:17:52.482807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.162 [2024-11-19 14:17:52.482816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:54.162 [2024-11-19 14:17:52.482825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.162 [2024-11-19 14:17:52.482833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.162 [2024-11-19 14:17:52.482926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.162 [2024-11-19 14:17:52.482942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:54.162 [2024-11-19 14:17:52.482950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.162 [2024-11-19 14:17:52.482959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.162 [2024-11-19 14:17:52.483120] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 332.808 ms, result 0 00:16:54.735 00:16:54.735 00:16:54.735 14:17:53 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:16:54.735 14:17:53 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:55.307 14:17:53 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:55.568 [2024-11-19 14:17:53.894871] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:55.569 [2024-11-19 14:17:53.895028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72693 ] 00:16:55.569 [2024-11-19 14:17:54.046361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.829 [2024-11-19 14:17:54.197781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.092 [2024-11-19 14:17:54.401844] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:56.092 [2024-11-19 14:17:54.401901] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:56.092 [2024-11-19 14:17:54.548730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.092 [2024-11-19 14:17:54.548766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:56.092 [2024-11-19 14:17:54.548776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:56.092 [2024-11-19 14:17:54.548782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.092 [2024-11-19 14:17:54.550812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.092 [2024-11-19 14:17:54.550842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:56.092 [2024-11-19 14:17:54.550850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.015 ms 00:16:56.092 [2024-11-19 14:17:54.550856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.092 [2024-11-19 14:17:54.550966] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:56.092 [2024-11-19 14:17:54.551535] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:56.092 [2024-11-19 14:17:54.551553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.092 [2024-11-19 14:17:54.551559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:56.092 [2024-11-19 14:17:54.551566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:16:56.092 [2024-11-19 14:17:54.551571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.092 [2024-11-19 14:17:54.552743] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:56.092 [2024-11-19 14:17:54.562386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.092 [2024-11-19 14:17:54.562412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:56.092 [2024-11-19 14:17:54.562422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.645 ms 00:16:56.092 [2024-11-19 14:17:54.562428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.092 [2024-11-19 14:17:54.562493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.092 [2024-11-19 14:17:54.562502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:56.092 [2024-11-19 14:17:54.562508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:56.092 [2024-11-19 14:17:54.562513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.092 [2024-11-19 14:17:54.566817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.092 [2024-11-19 
14:17:54.566840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:56.092 [2024-11-19 14:17:54.566847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.274 ms 00:16:56.092 [2024-11-19 14:17:54.566856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.092 [2024-11-19 14:17:54.566941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.092 [2024-11-19 14:17:54.566950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:56.092 [2024-11-19 14:17:54.566956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:56.092 [2024-11-19 14:17:54.566961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.092 [2024-11-19 14:17:54.566981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.092 [2024-11-19 14:17:54.566988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:56.092 [2024-11-19 14:17:54.566994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:56.092 [2024-11-19 14:17:54.566999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.092 [2024-11-19 14:17:54.567022] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:56.092 [2024-11-19 14:17:54.569799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.092 [2024-11-19 14:17:54.569820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:56.092 [2024-11-19 14:17:54.569828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.787 ms 00:16:56.092 [2024-11-19 14:17:54.569836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.092 [2024-11-19 14:17:54.569864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.092 [2024-11-19 14:17:54.569870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:56.092 [2024-11-19 14:17:54.569889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:56.092 [2024-11-19 14:17:54.569895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.092 [2024-11-19 14:17:54.569909] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:56.092 [2024-11-19 14:17:54.569923] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:56.092 [2024-11-19 14:17:54.569948] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:56.092 [2024-11-19 14:17:54.569961] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:56.092 [2024-11-19 14:17:54.570018] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:56.092 [2024-11-19 14:17:54.570026] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:56.092 [2024-11-19 14:17:54.570033] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:56.092 [2024-11-19 14:17:54.570040] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:56.092 [2024-11-19 14:17:54.570047] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:56.092 [2024-11-19 14:17:54.570053] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:56.092 [2024-11-19 14:17:54.570058] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:56.092 [2024-11-19 14:17:54.570064] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:56.092 [2024-11-19 14:17:54.570072] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:56.092 [2024-11-19 14:17:54.570078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.092 [2024-11-19 14:17:54.570084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:56.092 [2024-11-19 14:17:54.570089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:16:56.092 [2024-11-19 14:17:54.570094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.092 [2024-11-19 14:17:54.570143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.092 [2024-11-19 14:17:54.570150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:56.092 [2024-11-19 14:17:54.570155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:56.092 [2024-11-19 14:17:54.570161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.092 [2024-11-19 14:17:54.570215] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:56.092 [2024-11-19 14:17:54.570223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:56.093 [2024-11-19 14:17:54.570230] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:56.093 [2024-11-19 14:17:54.570235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.093 [2024-11-19 14:17:54.570241] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:56.093 [2024-11-19 14:17:54.570247] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:56.093 [2024-11-19 14:17:54.570252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:56.093 [2024-11-19 14:17:54.570258] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:56.093 [2024-11-19 14:17:54.570264] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:56.093 [2024-11-19 14:17:54.570270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:56.093 [2024-11-19 14:17:54.570276] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:56.093 [2024-11-19 14:17:54.570280] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:56.093 [2024-11-19 14:17:54.570285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:56.093 [2024-11-19 14:17:54.570290] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:56.093 [2024-11-19 14:17:54.570300] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:56.093 [2024-11-19 14:17:54.570304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.093 [2024-11-19 14:17:54.570309] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:56.093 [2024-11-19 14:17:54.570316] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:56.093 [2024-11-19 14:17:54.570320] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:16:56.093 [2024-11-19 14:17:54.570325] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:56.093 [2024-11-19 14:17:54.570330] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:56.093 [2024-11-19 14:17:54.570335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:56.093 [2024-11-19 14:17:54.570340] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:56.093 [2024-11-19 14:17:54.570344] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:56.093 [2024-11-19 14:17:54.570349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:56.093 [2024-11-19 14:17:54.570354] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:56.093 [2024-11-19 14:17:54.570359] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:56.093 [2024-11-19 14:17:54.570363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:56.093 [2024-11-19 14:17:54.570368] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:56.093 [2024-11-19 14:17:54.570373] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:56.093 [2024-11-19 14:17:54.570378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:56.093 [2024-11-19 14:17:54.570383] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:56.093 [2024-11-19 14:17:54.570388] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:56.093 [2024-11-19 14:17:54.570392] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:56.093 [2024-11-19 14:17:54.570397] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:56.093 [2024-11-19 14:17:54.570403] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:56.093 [2024-11-19 14:17:54.570408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:56.093 [2024-11-19 14:17:54.570413] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:56.093 [2024-11-19 14:17:54.570417] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:56.093 [2024-11-19 14:17:54.570422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:56.093 [2024-11-19 14:17:54.570426] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:56.093 [2024-11-19 14:17:54.570432] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:56.093 [2024-11-19 14:17:54.570438] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:56.093 [2024-11-19 14:17:54.570446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.093 [2024-11-19 14:17:54.570452] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:56.093 [2024-11-19 14:17:54.570458] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:56.093 [2024-11-19 14:17:54.570463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:56.093 [2024-11-19 14:17:54.570468] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:56.093 [2024-11-19 14:17:54.570473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:56.093 [2024-11-19 14:17:54.570478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:56.093 [2024-11-19 14:17:54.570483] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:56.093 [2024-11-19 14:17:54.570490] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:56.093 [2024-11-19 14:17:54.570496] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:56.093 [2024-11-19 14:17:54.570502] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:56.093 [2024-11-19 14:17:54.570508] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:56.093 [2024-11-19 14:17:54.570513] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:56.093 [2024-11-19 14:17:54.570519] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:56.093 [2024-11-19 14:17:54.570524] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:56.093 [2024-11-19 14:17:54.570529] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:56.093 [2024-11-19 14:17:54.570534] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:56.093 [2024-11-19 14:17:54.570539] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:56.093 [2024-11-19 14:17:54.570544] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:56.093 [2024-11-19 14:17:54.570549] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:56.093 [2024-11-19 14:17:54.570554] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:56.093 [2024-11-19 14:17:54.570560] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:56.093 [2024-11-19 14:17:54.570566] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:56.093 [2024-11-19 14:17:54.570574] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:56.093 [2024-11-19 14:17:54.570580] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:56.093 [2024-11-19 14:17:54.570586] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:56.093 [2024-11-19 14:17:54.570591] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:56.093 [2024-11-19 14:17:54.570597] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:56.093 [2024-11-19 14:17:54.570603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.093 [2024-11-19 14:17:54.570609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:56.093 [2024-11-19 14:17:54.570614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:16:56.093 [2024-11-19 14:17:54.570622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.093 [2024-11-19 14:17:54.582440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.093 [2024-11-19 14:17:54.582466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:56.093 [2024-11-19 14:17:54.582474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.787 ms 00:16:56.093 [2024-11-19 14:17:54.582479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.093 [2024-11-19 14:17:54.582565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.093 [2024-11-19 14:17:54.582572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:56.093 [2024-11-19 14:17:54.582578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:56.093 [2024-11-19 14:17:54.582583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.093 [2024-11-19 14:17:54.617712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.093 [2024-11-19 14:17:54.617742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:56.093 [2024-11-19 14:17:54.617752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.112 ms 00:16:56.093 [2024-11-19 14:17:54.617759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.093 [2024-11-19 14:17:54.617813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.093 [2024-11-19 14:17:54.617822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:56.093 [2024-11-19 14:17:54.617832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:56.093 [2024-11-19 14:17:54.617839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.093 [2024-11-19 14:17:54.618130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.093 [2024-11-19 14:17:54.618144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:56.093 [2024-11-19 14:17:54.618152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:16:56.093 [2024-11-19 14:17:54.618159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.093 [2024-11-19 14:17:54.618252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.093 [2024-11-19 14:17:54.618266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:56.093 [2024-11-19 14:17:54.618273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:56.093 [2024-11-19 14:17:54.618279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.093 [2024-11-19 14:17:54.629490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.094 [2024-11-19 14:17:54.629513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:56.094 [2024-11-19 14:17:54.629520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.193 ms 00:16:56.094 
[2024-11-19 14:17:54.629528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.094 [2024-11-19 14:17:54.639801] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:56.094 [2024-11-19 14:17:54.639925] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:56.094 [2024-11-19 14:17:54.639936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.094 [2024-11-19 14:17:54.639943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:56.094 [2024-11-19 14:17:54.639950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.333 ms 00:16:56.094 [2024-11-19 14:17:54.639956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-11-19 14:17:54.658730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-11-19 14:17:54.658768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:56.356 [2024-11-19 14:17:54.658777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.725 ms 00:16:56.356 [2024-11-19 14:17:54.658783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-11-19 14:17:54.668050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-11-19 14:17:54.668073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:56.356 [2024-11-19 14:17:54.668085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.215 ms 00:16:56.356 [2024-11-19 14:17:54.668091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-11-19 14:17:54.677152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-11-19 14:17:54.677175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:56.356 [2024-11-19 14:17:54.677182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.023 ms 00:16:56.356 [2024-11-19 14:17:54.677188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-11-19 14:17:54.677456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-11-19 14:17:54.677466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:56.356 [2024-11-19 14:17:54.677472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:16:56.356 [2024-11-19 14:17:54.677479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-11-19 14:17:54.723227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-11-19 14:17:54.723267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:56.356 [2024-11-19 14:17:54.723278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.731 ms 00:16:56.356 [2024-11-19 14:17:54.723288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-11-19 14:17:54.731182] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:56.356 [2024-11-19 14:17:54.742462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-11-19 14:17:54.742488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:56.356 [2024-11-19 14:17:54.742496] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.113 ms 00:16:56.356 [2024-11-19 14:17:54.742503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-11-19 14:17:54.742552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-11-19 14:17:54.742559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:56.356 [2024-11-19 14:17:54.742568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:56.356 [2024-11-19 14:17:54.742574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-11-19 14:17:54.742608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-11-19 14:17:54.742614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:56.356 [2024-11-19 14:17:54.742620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:56.356 [2024-11-19 14:17:54.742626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-11-19 14:17:54.743555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-11-19 14:17:54.743583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:56.356 [2024-11-19 14:17:54.743590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.913 ms 00:16:56.356 [2024-11-19 14:17:54.743595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-11-19 14:17:54.743619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.357 [2024-11-19 14:17:54.743629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:56.357 [2024-11-19 14:17:54.743635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:56.357 [2024-11-19 14:17:54.743641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.357 [2024-11-19 14:17:54.743665] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:56.357 [2024-11-19 14:17:54.743672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.357 [2024-11-19 14:17:54.743678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:56.357 [2024-11-19 14:17:54.743684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:56.357 [2024-11-19 14:17:54.743690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.357 [2024-11-19 14:17:54.762171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.357 [2024-11-19 14:17:54.762204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:56.357 [2024-11-19 14:17:54.762213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.465 ms 00:16:56.357 [2024-11-19 14:17:54.762219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.357 [2024-11-19 14:17:54.762283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.357 [2024-11-19 14:17:54.762291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:56.357 [2024-11-19 14:17:54.762297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:56.357 [2024-11-19 14:17:54.762303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.357 [2024-11-19 14:17:54.762899] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:56.357 [2024-11-19 14:17:54.765294] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 213.936 ms, result 0 00:16:56.357 [2024-11-19 14:17:54.766637] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:56.357 [2024-11-19 14:17:54.777613] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:56.620  [2024-11-19T14:17:55.182Z] Copying: 4096/4096 [kB] (average 15 MBps)[2024-11-19 14:17:55.045308] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:56.620 [2024-11-19 14:17:55.051683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.620 [2024-11-19 14:17:55.051712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:56.620 [2024-11-19 14:17:55.051720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:56.620 [2024-11-19 14:17:55.051726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.620 [2024-11-19 14:17:55.051742] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:56.620 [2024-11-19 14:17:55.053708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.620 [2024-11-19 14:17:55.053729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:56.620 [2024-11-19 14:17:55.053737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.958 ms 00:16:56.620 [2024-11-19 14:17:55.053743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.620 [2024-11-19 14:17:55.056372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.620 [2024-11-19 14:17:55.056396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:56.620 [2024-11-19 14:17:55.056403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.613 ms 00:16:56.620 [2024-11-19 14:17:55.056413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.620 [2024-11-19 14:17:55.059662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.620 [2024-11-19 14:17:55.059766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:56.620 [2024-11-19 14:17:55.059778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.238 ms 00:16:56.620 [2024-11-19 14:17:55.059784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.620 [2024-11-19 14:17:55.064981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.620 [2024-11-19 14:17:55.065003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:56.620 [2024-11-19 14:17:55.065011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.165 ms 00:16:56.620 [2024-11-19 14:17:55.065021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.620 [2024-11-19 14:17:55.082517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.620 [2024-11-19 14:17:55.082622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:56.620 [2024-11-19 14:17:55.082634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.453 ms 00:16:56.620 [2024-11-19 
14:17:55.082640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.620 [2024-11-19 14:17:55.094744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.620 [2024-11-19 14:17:55.094768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:56.620 [2024-11-19 14:17:55.094776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.073 ms 00:16:56.620 [2024-11-19 14:17:55.094783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.620 [2024-11-19 14:17:55.094896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.620 [2024-11-19 14:17:55.094904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:56.620 [2024-11-19 14:17:55.094911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:16:56.620 [2024-11-19 14:17:55.094916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.620 [2024-11-19 14:17:55.113206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.620 [2024-11-19 14:17:55.113303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:56.620 [2024-11-19 14:17:55.113315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.277 ms 00:16:56.620 [2024-11-19 14:17:55.113320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.620 [2024-11-19 14:17:55.131059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.620 [2024-11-19 14:17:55.131081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:56.620 [2024-11-19 14:17:55.131089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.699 ms 00:16:56.620 [2024-11-19 14:17:55.131095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.620 [2024-11-19 14:17:55.148059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.620 [2024-11-19 14:17:55.148156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:56.620 [2024-11-19 14:17:55.148167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.931 ms 00:16:56.620 [2024-11-19 14:17:55.148172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.620 [2024-11-19 14:17:55.165510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.620 [2024-11-19 14:17:55.165533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:56.620 [2024-11-19 14:17:55.165540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.288 ms 00:16:56.620 [2024-11-19 14:17:55.165545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.620 [2024-11-19 14:17:55.165577] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:56.620 [2024-11-19 14:17:55.165588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:56.620 [2024-11-19 14:17:55.165595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:56.620 [2024-11-19 14:17:55.165601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:56.620 [2024-11-19 14:17:55.165607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:56.620 [2024-11-19 14:17:55.165613] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:56.620 [2024-11-19 14:17:55.165619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:56.620 [2024-11-19 14:17:55.165624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:56.620 [2024-11-19 14:17:55.165630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:56.620 [2024-11-19 14:17:55.165635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:56.620 [2024-11-19 14:17:55.165641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 
14:17:55.165755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:16:56.621 [2024-11-19 14:17:55.165910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.165996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:56.621 [2024-11-19 14:17:55.166167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:56.622 [2024-11-19 14:17:55.166173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:56.622 [2024-11-19 14:17:55.166184] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:56.622 [2024-11-19 14:17:55.166190] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3df0115d-4eed-4c52-9819-8d435bdfff0b 00:16:56.622 [2024-11-19 14:17:55.166196] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:56.622 [2024-11-19 14:17:55.166203] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:56.622 [2024-11-19 
14:17:55.166209] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:56.622 [2024-11-19 14:17:55.166215] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:56.622 [2024-11-19 14:17:55.166222] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:56.622 [2024-11-19 14:17:55.166228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:56.622 [2024-11-19 14:17:55.166233] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:56.622 [2024-11-19 14:17:55.166238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:56.622 [2024-11-19 14:17:55.166242] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:56.622 [2024-11-19 14:17:55.166247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.622 [2024-11-19 14:17:55.166253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:56.622 [2024-11-19 14:17:55.166259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:16:56.622 [2024-11-19 14:17:55.166265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.622 [2024-11-19 14:17:55.175183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.622 [2024-11-19 14:17:55.175275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:56.622 [2024-11-19 14:17:55.175289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.905 ms 00:16:56.622 [2024-11-19 14:17:55.175295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.622 [2024-11-19 14:17:55.175455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.622 [2024-11-19 14:17:55.175462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:56.622 [2024-11-19 14:17:55.175468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:16:56.622 [2024-11-19 14:17:55.175473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.883 [2024-11-19 14:17:55.205050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.883 [2024-11-19 14:17:55.205075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:56.883 [2024-11-19 14:17:55.205086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.883 [2024-11-19 14:17:55.205092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.883 [2024-11-19 14:17:55.205149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.883 [2024-11-19 14:17:55.205156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:56.883 [2024-11-19 14:17:55.205161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.883 [2024-11-19 14:17:55.205167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.883 [2024-11-19 14:17:55.205196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.883 [2024-11-19 14:17:55.205203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:56.883 [2024-11-19 14:17:55.205208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.883 [2024-11-19 14:17:55.205216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.883 [2024-11-19 14:17:55.205229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:16:56.883 [2024-11-19 14:17:55.205235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:56.883 [2024-11-19 14:17:55.205242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.883 [2024-11-19 14:17:55.205247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.883 [2024-11-19 14:17:55.262375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.883 [2024-11-19 14:17:55.262527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:56.883 [2024-11-19 14:17:55.262545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.883 [2024-11-19 14:17:55.262551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.883 [2024-11-19 14:17:55.285100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.883 [2024-11-19 14:17:55.285207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:56.883 [2024-11-19 14:17:55.285219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.883 [2024-11-19 14:17:55.285225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.883 [2024-11-19 14:17:55.285266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.883 [2024-11-19 14:17:55.285273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:56.883 [2024-11-19 14:17:55.285279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.883 [2024-11-19 14:17:55.285284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.883 [2024-11-19 14:17:55.285309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.884 [2024-11-19 14:17:55.285316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:56.884 [2024-11-19 14:17:55.285321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.884 [2024-11-19 14:17:55.285327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.884 [2024-11-19 14:17:55.285399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.884 [2024-11-19 14:17:55.285407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:56.884 [2024-11-19 14:17:55.285414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.884 [2024-11-19 14:17:55.285419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.884 [2024-11-19 14:17:55.285442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.884 [2024-11-19 14:17:55.285449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:56.884 [2024-11-19 14:17:55.285455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.884 [2024-11-19 14:17:55.285461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.884 [2024-11-19 14:17:55.285488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.884 [2024-11-19 14:17:55.285495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:56.884 [2024-11-19 14:17:55.285501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.884 [2024-11-19 14:17:55.285506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.884 
[2024-11-19 14:17:55.285539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.884 [2024-11-19 14:17:55.285549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:56.884 [2024-11-19 14:17:55.285555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.884 [2024-11-19 14:17:55.285560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.884 [2024-11-19 14:17:55.285663] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 233.977 ms, result 0 00:16:57.454 00:16:57.454 00:16:57.454 14:17:55 -- ftl/trim.sh@93 -- # svcpid=72718 00:16:57.454 14:17:55 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:57.454 14:17:55 -- ftl/trim.sh@94 -- # waitforlisten 72718 00:16:57.454 14:17:55 -- common/autotest_common.sh@829 -- # '[' -z 72718 ']' 00:16:57.454 14:17:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.454 14:17:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.454 14:17:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.454 14:17:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.454 14:17:55 -- common/autotest_common.sh@10 -- # set +x 00:16:57.714 [2024-11-19 14:17:56.016770] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:57.715 [2024-11-19 14:17:56.017473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72718 ] 00:16:57.715 [2024-11-19 14:17:56.167824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.976 [2024-11-19 14:17:56.305723] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:57.976 [2024-11-19 14:17:56.305889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.548 14:17:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.548 14:17:56 -- common/autotest_common.sh@862 -- # return 0 00:16:58.548 14:17:56 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:58.548 [2024-11-19 14:17:57.016114] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:58.548 [2024-11-19 14:17:57.016265] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:58.811 [2024-11-19 14:17:57.177329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.811 [2024-11-19 14:17:57.177364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:58.811 [2024-11-19 14:17:57.177376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:58.811 [2024-11-19 14:17:57.177382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.811 [2024-11-19 14:17:57.179416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.811 [2024-11-19 14:17:57.179538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:58.811 [2024-11-19 14:17:57.179554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
2.020 ms 00:16:58.811 [2024-11-19 14:17:57.179560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.811 [2024-11-19 14:17:57.179617] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:58.811 [2024-11-19 14:17:57.180172] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:58.811 [2024-11-19 14:17:57.180190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.811 [2024-11-19 14:17:57.180197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:58.811 [2024-11-19 14:17:57.180205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:16:58.811 [2024-11-19 14:17:57.180211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.811 [2024-11-19 14:17:57.181171] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:58.811 [2024-11-19 14:17:57.190813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.811 [2024-11-19 14:17:57.190935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:58.811 [2024-11-19 14:17:57.190950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.647 ms 00:16:58.811 [2024-11-19 14:17:57.190957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.811 [2024-11-19 14:17:57.191007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.811 [2024-11-19 14:17:57.191016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:58.811 [2024-11-19 14:17:57.191023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:58.811 [2024-11-19 14:17:57.191029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.811 [2024-11-19 14:17:57.195356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.811 [2024-11-19 14:17:57.195443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:58.811 [2024-11-19 14:17:57.195516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.290 ms 00:16:58.811 [2024-11-19 14:17:57.195536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.811 [2024-11-19 14:17:57.195618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.811 [2024-11-19 14:17:57.195735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:58.811 [2024-11-19 14:17:57.195792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:58.812 [2024-11-19 14:17:57.195808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.812 [2024-11-19 14:17:57.195836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.812 [2024-11-19 14:17:57.195853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:58.812 [2024-11-19 14:17:57.195869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:58.812 [2024-11-19 14:17:57.195895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.812 [2024-11-19 14:17:57.195927] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:58.812 [2024-11-19 14:17:57.198743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.812 [2024-11-19 14:17:57.198822] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:58.812 [2024-11-19 14:17:57.198892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.822 ms 00:16:58.812 [2024-11-19 14:17:57.198911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.812 [2024-11-19 14:17:57.198952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.812 [2024-11-19 14:17:57.199009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:58.812 [2024-11-19 14:17:57.199058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:58.812 [2024-11-19 14:17:57.199074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.812 [2024-11-19 14:17:57.199100] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:58.812 [2024-11-19 14:17:57.199123] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:58.812 [2024-11-19 14:17:57.199166] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:58.812 [2024-11-19 14:17:57.199233] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:58.812 [2024-11-19 14:17:57.199316] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:58.812 [2024-11-19 14:17:57.199370] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:58.812 [2024-11-19 14:17:57.199401] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:58.812 [2024-11-19 14:17:57.199419] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:58.812 [2024-11-19 14:17:57.199428] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:58.812 [2024-11-19 14:17:57.199434] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:58.812 [2024-11-19 14:17:57.199441] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:58.812 [2024-11-19 14:17:57.199447] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:58.812 [2024-11-19 14:17:57.199454] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:58.812 [2024-11-19 14:17:57.199460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.812 [2024-11-19 14:17:57.199467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:58.812 [2024-11-19 14:17:57.199473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:16:58.812 [2024-11-19 14:17:57.199480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.812 [2024-11-19 14:17:57.199532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.812 [2024-11-19 14:17:57.199540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:58.812 [2024-11-19 14:17:57.199546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:58.812 [2024-11-19 14:17:57.199552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.812 [2024-11-19 14:17:57.199620] ftl_layout.c: 759:ftl_layout_dump: 
*NOTICE*: [FTL][ftl0] NV cache layout: 00:16:58.812 [2024-11-19 14:17:57.199629] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:58.812 [2024-11-19 14:17:57.199635] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.812 [2024-11-19 14:17:57.199642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.812 [2024-11-19 14:17:57.199648] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:58.812 [2024-11-19 14:17:57.199654] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:58.812 [2024-11-19 14:17:57.199659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:58.812 [2024-11-19 14:17:57.199669] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:58.812 [2024-11-19 14:17:57.199674] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:58.812 [2024-11-19 14:17:57.199680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.812 [2024-11-19 14:17:57.199684] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:58.812 [2024-11-19 14:17:57.199691] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:58.812 [2024-11-19 14:17:57.199696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.812 [2024-11-19 14:17:57.199702] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:58.812 [2024-11-19 14:17:57.199707] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:58.812 [2024-11-19 14:17:57.199713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.812 [2024-11-19 14:17:57.199718] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:58.812 [2024-11-19 14:17:57.199724] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:58.812 [2024-11-19 14:17:57.199729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.812 [2024-11-19 14:17:57.199735] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:58.812 [2024-11-19 14:17:57.199740] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:58.812 [2024-11-19 14:17:57.199748] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:58.812 [2024-11-19 14:17:57.199753] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:58.812 [2024-11-19 14:17:57.199761] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:58.812 [2024-11-19 14:17:57.199766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:58.812 [2024-11-19 14:17:57.199776] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:58.812 [2024-11-19 14:17:57.199781] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:58.812 [2024-11-19 14:17:57.199787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:58.812 [2024-11-19 14:17:57.199792] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:58.812 [2024-11-19 14:17:57.199798] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:58.812 [2024-11-19 14:17:57.199802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:58.812 [2024-11-19 14:17:57.199809] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:58.812 [2024-11-19 14:17:57.199814] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:58.812 [2024-11-19 14:17:57.199820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:58.812 [2024-11-19 14:17:57.199825] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:58.812 [2024-11-19 14:17:57.199831] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:58.812 [2024-11-19 14:17:57.199835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.812 [2024-11-19 14:17:57.199841] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:58.812 [2024-11-19 14:17:57.199846] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:58.812 [2024-11-19 14:17:57.199853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.812 [2024-11-19 14:17:57.199858] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:58.812 [2024-11-19 14:17:57.199866] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:58.812 [2024-11-19 14:17:57.199871] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.812 [2024-11-19 14:17:57.199894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.812 [2024-11-19 14:17:57.199900] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:58.812 [2024-11-19 14:17:57.199906] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:58.812 [2024-11-19 14:17:57.199911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:58.812 [2024-11-19 14:17:57.199917] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:58.812 [2024-11-19 14:17:57.199922] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:58.812 [2024-11-19 14:17:57.199927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:58.812 [2024-11-19 14:17:57.199934] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:58.812 [2024-11-19 14:17:57.199942] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.812 [2024-11-19 14:17:57.199949] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:58.812 [2024-11-19 14:17:57.199956] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:58.812 [2024-11-19 14:17:57.199961] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:58.812 [2024-11-19 14:17:57.199970] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:58.812 [2024-11-19 14:17:57.199977] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:58.812 [2024-11-19 14:17:57.199983] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:58.812 [2024-11-19 14:17:57.199989] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:58.812 [2024-11-19 
14:17:57.199995] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:58.812 [2024-11-19 14:17:57.200001] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:58.812 [2024-11-19 14:17:57.200008] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:58.812 [2024-11-19 14:17:57.200013] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:58.813 [2024-11-19 14:17:57.200019] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:58.813 [2024-11-19 14:17:57.200025] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:58.813 [2024-11-19 14:17:57.200032] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:58.813 [2024-11-19 14:17:57.200038] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.813 [2024-11-19 14:17:57.200045] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:58.813 [2024-11-19 14:17:57.200051] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:58.813 [2024-11-19 14:17:57.200057] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:58.813 [2024-11-19 14:17:57.200062] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:58.813 [2024-11-19 14:17:57.200071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 14:17:57.200076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:58.813 [2024-11-19 14:17:57.200083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:16:58.813 [2024-11-19 14:17:57.200088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.813 [2024-11-19 14:17:57.211929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 14:17:57.212011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:58.813 [2024-11-19 14:17:57.212078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.799 ms 00:16:58.813 [2024-11-19 14:17:57.212098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.813 [2024-11-19 14:17:57.212195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 14:17:57.212220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:58.813 [2024-11-19 14:17:57.212264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:58.813 [2024-11-19 14:17:57.212281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.813 [2024-11-19 14:17:57.236258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 
14:17:57.236347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:58.813 [2024-11-19 14:17:57.236388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.951 ms 00:16:58.813 [2024-11-19 14:17:57.236406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.813 [2024-11-19 14:17:57.236459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 14:17:57.236479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:58.813 [2024-11-19 14:17:57.236539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:58.813 [2024-11-19 14:17:57.236557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.813 [2024-11-19 14:17:57.236848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 14:17:57.236946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:58.813 [2024-11-19 14:17:57.236991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:16:58.813 [2024-11-19 14:17:57.237008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.813 [2024-11-19 14:17:57.237108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 14:17:57.237129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:58.813 [2024-11-19 14:17:57.237170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:58.813 [2024-11-19 14:17:57.237187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.813 [2024-11-19 14:17:57.249055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 14:17:57.249137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:58.813 [2024-11-19 14:17:57.249181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.842 ms 00:16:58.813 [2024-11-19 14:17:57.249199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.813 [2024-11-19 14:17:57.259002] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:58.813 [2024-11-19 14:17:57.259108] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:58.813 [2024-11-19 14:17:57.259156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 14:17:57.259186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:58.813 [2024-11-19 14:17:57.259206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.853 ms 00:16:58.813 [2024-11-19 14:17:57.259259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.813 [2024-11-19 14:17:57.277965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 14:17:57.278048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:58.813 [2024-11-19 14:17:57.278088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.646 ms 00:16:58.813 [2024-11-19 14:17:57.278105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.813 [2024-11-19 14:17:57.287019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 14:17:57.287103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:16:58.813 [2024-11-19 14:17:57.287142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.811 ms 00:16:58.813 [2024-11-19 14:17:57.287158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.813 [2024-11-19 14:17:57.295994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 14:17:57.296075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:58.813 [2024-11-19 14:17:57.296172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.791 ms 00:16:58.813 [2024-11-19 14:17:57.296189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.813 [2024-11-19 14:17:57.296483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 14:17:57.296546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:58.813 [2024-11-19 14:17:57.296595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:16:58.813 [2024-11-19 14:17:57.296612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.813 [2024-11-19 14:17:57.342038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 14:17:57.342149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:58.813 [2024-11-19 14:17:57.342194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.396 ms 00:16:58.813 [2024-11-19 14:17:57.342211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.813 [2024-11-19 14:17:57.350200] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:58.813 [2024-11-19 14:17:57.361483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 14:17:57.361585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:58.813 [2024-11-19 14:17:57.361623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.213 ms 00:16:58.813 [2024-11-19 14:17:57.361642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.813 [2024-11-19 14:17:57.361700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 14:17:57.361722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:58.813 [2024-11-19 14:17:57.361737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:58.813 [2024-11-19 14:17:57.361756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.813 [2024-11-19 14:17:57.361801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 14:17:57.361819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:58.813 [2024-11-19 14:17:57.361835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:58.813 [2024-11-19 14:17:57.361890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.813 [2024-11-19 14:17:57.362817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 14:17:57.362916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:58.813 [2024-11-19 14:17:57.362960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.870 ms 00:16:58.813 [2024-11-19 14:17:57.362978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:58.813 [2024-11-19 14:17:57.363013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 14:17:57.363121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:58.813 [2024-11-19 14:17:57.363162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:58.813 [2024-11-19 14:17:57.363179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.813 [2024-11-19 14:17:57.363215] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:58.813 [2024-11-19 14:17:57.363234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.813 [2024-11-19 14:17:57.363255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:58.813 [2024-11-19 14:17:57.363272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:58.813 [2024-11-19 14:17:57.363286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.075 [2024-11-19 14:17:57.381556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.075 [2024-11-19 14:17:57.381640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:59.075 [2024-11-19 14:17:57.381681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.240 ms 00:16:59.075 [2024-11-19 14:17:57.381698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.075 [2024-11-19 14:17:57.381769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.075 [2024-11-19 14:17:57.381788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:59.075 [2024-11-19 14:17:57.381806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:59.075 [2024-11-19 14:17:57.381822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.075 [2024-11-19 14:17:57.382504] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:59.075 [2024-11-19 14:17:57.385014] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 204.971 ms, result 0 00:16:59.075 [2024-11-19 14:17:57.386032] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:59.075 Some configs were skipped because the RPC state that can call them passed over. 
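The FTL startup above completes under ftl/trim.sh's load_config step; the two RPCs that follow trim 1024 blocks at the head and at the tail of the device. A minimal shell sketch of the sequence this stretch of the log records, using the binary path, flags, and LBA values shown verbatim above (the readiness poll and the config file name are illustrative stand-ins for the harness's own helpers, not its exact code):

  spdk=/home/vagrant/spdk_repo/spdk
  rpc=$spdk/scripts/rpc.py

  # Start the target with FTL init logging and keep its pid for teardown.
  $spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!

  # Wait until the target answers on /var/tmp/spdk.sock (the harness uses
  # its waitforlisten helper; a simple poll stands in for it here).
  until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

  # Replay the saved bdev/FTL configuration (file name assumed; the log
  # does not show where load_config reads from).
  $rpc load_config < ftl.json

  # Unmap 1024 blocks at LBA 0 and 1024 blocks at LBA 23591936, i.e. the
  # first and last 1024 entries of the 23592960-entry L2P reported above.
  $rpc bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  $rpc bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

  # Stop the target; on shutdown FTL persists metadata and dumps band state.
  kill $svcpid
  wait $svcpid

Note that 23591936 = 23592960 - 1024, so the second unmap exercises the very end of the logical address space reported during layout setup.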
00:16:59.075 14:17:57 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:59.075 [2024-11-19 14:17:57.624050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.075 [2024-11-19 14:17:57.624145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:59.075 [2024-11-19 14:17:57.624184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.122 ms 00:16:59.075 [2024-11-19 14:17:57.624203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.075 [2024-11-19 14:17:57.624242] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 19.312 ms, result 0 00:16:59.075 true 00:16:59.336 14:17:57 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:59.336 [2024-11-19 14:17:57.830061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.336 [2024-11-19 14:17:57.830157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:59.336 [2024-11-19 14:17:57.830198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.747 ms 00:16:59.336 [2024-11-19 14:17:57.830216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.336 [2024-11-19 14:17:57.830256] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 17.940 ms, result 0 00:16:59.336 true 00:16:59.336 14:17:57 -- ftl/trim.sh@102 -- # killprocess 72718 00:16:59.336 14:17:57 -- common/autotest_common.sh@936 -- # '[' -z 72718 ']' 00:16:59.336 14:17:57 -- common/autotest_common.sh@940 -- # kill -0 72718 00:16:59.336 14:17:57 -- common/autotest_common.sh@941 -- # uname 00:16:59.336 14:17:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:59.336 14:17:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72718 00:16:59.336 killing process with pid 72718 00:16:59.336 14:17:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:59.336 14:17:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:59.336 14:17:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72718' 00:16:59.336 14:17:57 -- common/autotest_common.sh@955 -- # kill 72718 00:16:59.336 14:17:57 -- common/autotest_common.sh@960 -- # wait 72718 00:16:59.910 [2024-11-19 14:17:58.399151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.910 [2024-11-19 14:17:58.399199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:59.910 [2024-11-19 14:17:58.399209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:59.910 [2024-11-19 14:17:58.399217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.910 [2024-11-19 14:17:58.399237] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:59.910 [2024-11-19 14:17:58.401400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.911 [2024-11-19 14:17:58.401432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:59.911 [2024-11-19 14:17:58.401443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.137 ms 00:16:59.911 [2024-11-19 14:17:58.401449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.911 [2024-11-19 
14:17:58.401693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.911 [2024-11-19 14:17:58.401701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:59.911 [2024-11-19 14:17:58.401709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:16:59.911 [2024-11-19 14:17:58.401714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.911 [2024-11-19 14:17:58.404885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.911 [2024-11-19 14:17:58.404906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:59.911 [2024-11-19 14:17:58.404916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.155 ms 00:16:59.911 [2024-11-19 14:17:58.404922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.911 [2024-11-19 14:17:58.410316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.911 [2024-11-19 14:17:58.410467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:59.911 [2024-11-19 14:17:58.410481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.367 ms 00:16:59.911 [2024-11-19 14:17:58.410487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.911 [2024-11-19 14:17:58.418182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.911 [2024-11-19 14:17:58.418266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:59.911 [2024-11-19 14:17:58.418314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.637 ms 00:16:59.911 [2024-11-19 14:17:58.418331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.911 [2024-11-19 14:17:58.424872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.911 [2024-11-19 14:17:58.425043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:59.911 [2024-11-19 14:17:58.425090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.503 ms 00:16:59.911 [2024-11-19 14:17:58.425107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.911 [2024-11-19 14:17:58.425222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.911 [2024-11-19 14:17:58.425281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:59.911 [2024-11-19 14:17:58.425302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:16:59.911 [2024-11-19 14:17:58.425317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.911 [2024-11-19 14:17:58.433255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.911 [2024-11-19 14:17:58.433337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:59.911 [2024-11-19 14:17:58.433381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.887 ms 00:16:59.911 [2024-11-19 14:17:58.433398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.911 [2024-11-19 14:17:58.440955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.911 [2024-11-19 14:17:58.441035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:59.911 [2024-11-19 14:17:58.441079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.521 ms 00:16:59.911 [2024-11-19 14:17:58.441096] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:59.911 [2024-11-19 14:17:58.448932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.911 [2024-11-19 14:17:58.449013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:59.911 [2024-11-19 14:17:58.449052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.792 ms 00:16:59.911 [2024-11-19 14:17:58.449068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.911 [2024-11-19 14:17:58.456864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.911 [2024-11-19 14:17:58.456953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:59.911 [2024-11-19 14:17:58.456993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.738 ms 00:16:59.911 [2024-11-19 14:17:58.457009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.911 [2024-11-19 14:17:58.457043] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:59.911 [2024-11-19 14:17:58.457064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457733] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.457990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.458012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.458035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.458093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.458118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.458139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.458162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.458208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.458232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:59.911 [2024-11-19 14:17:58.458281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.458307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.458329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.458375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.458397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.458423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.458444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.458592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.458615] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.458638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.458678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.458702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.458724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.458800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.458825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.458847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.458869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.458940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.458963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.458988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 
14:17:58.459522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.459983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.460006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.460028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.460088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.460111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.460136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.460158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.460202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.460225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.460276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.460299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.460349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:16:59.912 [2024-11-19 14:17:58.460374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.460398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.460421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.460470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.460497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.460520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.460542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.460578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:59.912 [2024-11-19 14:17:58.460591] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:59.912 [2024-11-19 14:17:58.460599] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3df0115d-4eed-4c52-9819-8d435bdfff0b 00:16:59.912 [2024-11-19 14:17:58.460605] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:59.912 [2024-11-19 14:17:58.460613] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:59.912 [2024-11-19 14:17:58.460619] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:59.913 [2024-11-19 14:17:58.460627] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:59.913 [2024-11-19 14:17:58.460633] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:59.913 [2024-11-19 14:17:58.460640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:59.913 [2024-11-19 14:17:58.460646] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:59.913 [2024-11-19 14:17:58.460652] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:59.913 [2024-11-19 14:17:58.460657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:59.913 [2024-11-19 14:17:58.460663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.913 [2024-11-19 14:17:58.460669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:59.913 [2024-11-19 14:17:58.460677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.622 ms 00:16:59.913 [2024-11-19 14:17:58.460684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.913 [2024-11-19 14:17:58.470185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.913 [2024-11-19 14:17:58.470260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:00.175 [2024-11-19 14:17:58.470299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.473 ms 00:17:00.175 [2024-11-19 14:17:58.470316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.175 [2024-11-19 14:17:58.470492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.175 [2024-11-19 14:17:58.470849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:00.175 
[2024-11-19 14:17:58.471106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:17:00.175 [2024-11-19 14:17:58.471179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.175 [2024-11-19 14:17:58.522350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.175 [2024-11-19 14:17:58.522474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:00.175 [2024-11-19 14:17:58.522530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.175 [2024-11-19 14:17:58.522553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.175 [2024-11-19 14:17:58.522660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.175 [2024-11-19 14:17:58.522686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:00.175 [2024-11-19 14:17:58.522709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.175 [2024-11-19 14:17:58.522785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.175 [2024-11-19 14:17:58.522844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.175 [2024-11-19 14:17:58.523190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:00.175 [2024-11-19 14:17:58.523256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.175 [2024-11-19 14:17:58.523375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.175 [2024-11-19 14:17:58.523426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.175 [2024-11-19 14:17:58.523449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:00.175 [2024-11-19 14:17:58.523499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.175 [2024-11-19 14:17:58.523525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.175 [2024-11-19 14:17:58.600360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.175 [2024-11-19 14:17:58.600502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:00.175 [2024-11-19 14:17:58.600559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.175 [2024-11-19 14:17:58.600582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.175 [2024-11-19 14:17:58.629812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.175 [2024-11-19 14:17:58.629957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:00.175 [2024-11-19 14:17:58.630012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.175 [2024-11-19 14:17:58.630037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.175 [2024-11-19 14:17:58.630108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.175 [2024-11-19 14:17:58.630132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:00.175 [2024-11-19 14:17:58.630155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.175 [2024-11-19 14:17:58.630174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.175 [2024-11-19 14:17:58.630217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.176 [2024-11-19 14:17:58.630237] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:00.176 [2024-11-19 14:17:58.630258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.176 [2024-11-19 14:17:58.630306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.176 [2024-11-19 14:17:58.630466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.176 [2024-11-19 14:17:58.630495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:00.176 [2024-11-19 14:17:58.630542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.176 [2024-11-19 14:17:58.630564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.176 [2024-11-19 14:17:58.630614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.176 [2024-11-19 14:17:58.630638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:00.176 [2024-11-19 14:17:58.630659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.176 [2024-11-19 14:17:58.630677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.176 [2024-11-19 14:17:58.630731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.176 [2024-11-19 14:17:58.630753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:00.176 [2024-11-19 14:17:58.630775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.176 [2024-11-19 14:17:58.630795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.176 [2024-11-19 14:17:58.630896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.176 [2024-11-19 14:17:58.630923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:00.176 [2024-11-19 14:17:58.630944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.176 [2024-11-19 14:17:58.630963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.176 [2024-11-19 14:17:58.631159] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 231.978 ms, result 0 00:17:01.125 14:17:59 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:01.125 [2024-11-19 14:17:59.604975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:01.125 [2024-11-19 14:17:59.605130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72771 ] 00:17:01.385 [2024-11-19 14:17:59.761238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.646 [2024-11-19 14:17:59.980457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.908 [2024-11-19 14:18:00.270930] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:01.908 [2024-11-19 14:18:00.271285] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:01.908 [2024-11-19 14:18:00.427860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.908 [2024-11-19 14:18:00.427933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:01.908 [2024-11-19 14:18:00.427949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:01.908 [2024-11-19 14:18:00.427957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.908 [2024-11-19 14:18:00.430838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.908 [2024-11-19 14:18:00.430907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:01.908 [2024-11-19 14:18:00.430919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.860 ms 00:17:01.908 [2024-11-19 14:18:00.430928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.908 [2024-11-19 14:18:00.431044] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:01.908 [2024-11-19 14:18:00.432233] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:01.908 [2024-11-19 14:18:00.432292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.908 [2024-11-19 14:18:00.432303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:01.908 [2024-11-19 14:18:00.432314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.258 ms 00:17:01.908 [2024-11-19 14:18:00.432323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.908 [2024-11-19 14:18:00.434163] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:01.908 [2024-11-19 14:18:00.448722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.908 [2024-11-19 14:18:00.448768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:01.908 [2024-11-19 14:18:00.448781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.560 ms 00:17:01.908 [2024-11-19 14:18:00.448789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.908 [2024-11-19 14:18:00.448925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.908 [2024-11-19 14:18:00.448939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:01.908 [2024-11-19 14:18:00.448948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:01.908 [2024-11-19 14:18:00.448956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.908 [2024-11-19 14:18:00.457097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.908 [2024-11-19 
14:18:00.457133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:01.908 [2024-11-19 14:18:00.457144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.092 ms 00:17:01.908 [2024-11-19 14:18:00.457158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.909 [2024-11-19 14:18:00.457276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.909 [2024-11-19 14:18:00.457286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:01.909 [2024-11-19 14:18:00.457295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:01.909 [2024-11-19 14:18:00.457303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.909 [2024-11-19 14:18:00.457331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.909 [2024-11-19 14:18:00.457339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:01.909 [2024-11-19 14:18:00.457347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:01.909 [2024-11-19 14:18:00.457355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.909 [2024-11-19 14:18:00.457387] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:01.909 [2024-11-19 14:18:00.461599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.909 [2024-11-19 14:18:00.461631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:01.909 [2024-11-19 14:18:00.461641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.228 ms 00:17:01.909 [2024-11-19 14:18:00.461652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.909 [2024-11-19 14:18:00.461728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.909 [2024-11-19 14:18:00.461738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:01.909 [2024-11-19 14:18:00.461747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:01.909 [2024-11-19 14:18:00.461755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.909 [2024-11-19 14:18:00.461774] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:01.909 [2024-11-19 14:18:00.461794] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:01.909 [2024-11-19 14:18:00.461828] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:01.909 [2024-11-19 14:18:00.461847] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:01.909 [2024-11-19 14:18:00.461942] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:01.909 [2024-11-19 14:18:00.461954] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:01.909 [2024-11-19 14:18:00.461965] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:01.909 [2024-11-19 14:18:00.461975] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:01.909 [2024-11-19 14:18:00.461985] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:01.909 [2024-11-19 14:18:00.461993] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:01.909 [2024-11-19 14:18:00.462001] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:01.909 [2024-11-19 14:18:00.462008] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:01.909 [2024-11-19 14:18:00.462019] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:01.909 [2024-11-19 14:18:00.462027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.909 [2024-11-19 14:18:00.462035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:01.909 [2024-11-19 14:18:00.462042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:17:01.909 [2024-11-19 14:18:00.462049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.909 [2024-11-19 14:18:00.462116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.909 [2024-11-19 14:18:00.462126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:01.909 [2024-11-19 14:18:00.462134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:01.909 [2024-11-19 14:18:00.462140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.909 [2024-11-19 14:18:00.462218] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:01.909 [2024-11-19 14:18:00.462228] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:01.909 [2024-11-19 14:18:00.462237] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:01.909 [2024-11-19 14:18:00.462245] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.909 [2024-11-19 14:18:00.462254] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:01.909 [2024-11-19 14:18:00.462261] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:01.909 [2024-11-19 14:18:00.462268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:01.909 [2024-11-19 14:18:00.462275] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:01.909 [2024-11-19 14:18:00.462283] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:01.909 [2024-11-19 14:18:00.462289] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:01.909 [2024-11-19 14:18:00.462297] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:01.909 [2024-11-19 14:18:00.462304] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:01.909 [2024-11-19 14:18:00.462314] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:01.909 [2024-11-19 14:18:00.462321] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:01.909 [2024-11-19 14:18:00.462336] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:01.909 [2024-11-19 14:18:00.462343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.909 [2024-11-19 14:18:00.462350] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:01.909 [2024-11-19 14:18:00.462357] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:01.909 [2024-11-19 14:18:00.462363] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:01.909 [2024-11-19 14:18:00.462370] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:01.909 [2024-11-19 14:18:00.462376] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:01.909 [2024-11-19 14:18:00.462383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:01.909 [2024-11-19 14:18:00.462389] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:01.909 [2024-11-19 14:18:00.462396] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:01.909 [2024-11-19 14:18:00.462404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:01.909 [2024-11-19 14:18:00.462410] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:01.909 [2024-11-19 14:18:00.462416] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:01.909 [2024-11-19 14:18:00.462422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:01.909 [2024-11-19 14:18:00.462429] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:01.909 [2024-11-19 14:18:00.462435] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:01.909 [2024-11-19 14:18:00.462442] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:01.909 [2024-11-19 14:18:00.462449] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:01.909 [2024-11-19 14:18:00.462457] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:01.909 [2024-11-19 14:18:00.462464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:01.909 [2024-11-19 14:18:00.462470] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:01.909 [2024-11-19 14:18:00.462477] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:01.909 [2024-11-19 14:18:00.462483] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:01.909 [2024-11-19 14:18:00.462489] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:01.909 [2024-11-19 14:18:00.462496] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:01.909 [2024-11-19 14:18:00.462501] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:01.909 [2024-11-19 14:18:00.462507] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:01.909 [2024-11-19 14:18:00.462516] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:01.909 [2024-11-19 14:18:00.462523] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:01.909 [2024-11-19 14:18:00.462534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.909 [2024-11-19 14:18:00.462543] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:01.909 [2024-11-19 14:18:00.462551] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:01.909 [2024-11-19 14:18:00.462557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:01.909 [2024-11-19 14:18:00.462564] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:01.909 [2024-11-19 14:18:00.462571] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:01.909 [2024-11-19 14:18:00.462578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:01.909 [2024-11-19 14:18:00.462586] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:01.909 [2024-11-19 14:18:00.462596] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:01.909 [2024-11-19 14:18:00.462604] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:01.909 [2024-11-19 14:18:00.462612] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:01.909 [2024-11-19 14:18:00.462619] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:01.909 [2024-11-19 14:18:00.462626] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:01.909 [2024-11-19 14:18:00.462633] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:01.909 [2024-11-19 14:18:00.462641] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:01.909 [2024-11-19 14:18:00.462648] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:01.909 [2024-11-19 14:18:00.462655] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:01.909 [2024-11-19 14:18:00.462662] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:01.909 [2024-11-19 14:18:00.462669] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:01.909 [2024-11-19 14:18:00.462675] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:01.910 [2024-11-19 14:18:00.462683] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:01.910 [2024-11-19 14:18:00.462691] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:01.910 [2024-11-19 14:18:00.462698] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:01.910 [2024-11-19 14:18:00.462711] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:01.910 [2024-11-19 14:18:00.462718] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:01.910 [2024-11-19 14:18:00.462726] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:01.910 [2024-11-19 14:18:00.462733] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:01.910 [2024-11-19 14:18:00.462742] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:01.910 [2024-11-19 14:18:00.462750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.910 [2024-11-19 14:18:00.462758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:01.910 [2024-11-19 14:18:00.462765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:17:01.910 [2024-11-19 14:18:00.462772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.172 [2024-11-19 14:18:00.481125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.172 [2024-11-19 14:18:00.481168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:02.172 [2024-11-19 14:18:00.481180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.308 ms 00:17:02.172 [2024-11-19 14:18:00.481188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.172 [2024-11-19 14:18:00.481317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.172 [2024-11-19 14:18:00.481328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:02.172 [2024-11-19 14:18:00.481338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:02.172 [2024-11-19 14:18:00.481346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.172 [2024-11-19 14:18:00.527002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.172 [2024-11-19 14:18:00.527049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:02.172 [2024-11-19 14:18:00.527062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.632 ms 00:17:02.172 [2024-11-19 14:18:00.527070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.172 [2024-11-19 14:18:00.527156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.172 [2024-11-19 14:18:00.527167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:02.172 [2024-11-19 14:18:00.527181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:02.172 [2024-11-19 14:18:00.527189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.172 [2024-11-19 14:18:00.527780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.172 [2024-11-19 14:18:00.527815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:02.172 [2024-11-19 14:18:00.527825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:17:02.172 [2024-11-19 14:18:00.527833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.172 [2024-11-19 14:18:00.528007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.172 [2024-11-19 14:18:00.528020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:02.172 [2024-11-19 14:18:00.528028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:17:02.172 [2024-11-19 14:18:00.528037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.172 [2024-11-19 14:18:00.545245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.172 [2024-11-19 14:18:00.545284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:02.172 [2024-11-19 14:18:00.545294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.181 ms 00:17:02.172 
[2024-11-19 14:18:00.545305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.172 [2024-11-19 14:18:00.559701] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:02.172 [2024-11-19 14:18:00.559741] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:02.172 [2024-11-19 14:18:00.559752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.172 [2024-11-19 14:18:00.559761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:02.172 [2024-11-19 14:18:00.559771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.330 ms 00:17:02.172 [2024-11-19 14:18:00.559778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.172 [2024-11-19 14:18:00.586250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.172 [2024-11-19 14:18:00.586315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:02.172 [2024-11-19 14:18:00.586327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.375 ms 00:17:02.172 [2024-11-19 14:18:00.586335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.172 [2024-11-19 14:18:00.599734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.172 [2024-11-19 14:18:00.599775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:02.172 [2024-11-19 14:18:00.599797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.304 ms 00:17:02.172 [2024-11-19 14:18:00.599805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.172 [2024-11-19 14:18:00.612751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.172 [2024-11-19 14:18:00.612789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:02.172 [2024-11-19 14:18:00.612800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.844 ms 00:17:02.172 [2024-11-19 14:18:00.612808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.172 [2024-11-19 14:18:00.613234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.172 [2024-11-19 14:18:00.613249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:02.172 [2024-11-19 14:18:00.613259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:17:02.172 [2024-11-19 14:18:00.613269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.172 [2024-11-19 14:18:00.681530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.172 [2024-11-19 14:18:00.681578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:02.172 [2024-11-19 14:18:00.681593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.237 ms 00:17:02.172 [2024-11-19 14:18:00.681608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.172 [2024-11-19 14:18:00.692766] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:02.172 [2024-11-19 14:18:00.711769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.172 [2024-11-19 14:18:00.711816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:02.172 [2024-11-19 14:18:00.711829] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.059 ms 00:17:02.172 [2024-11-19 14:18:00.711838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.172 [2024-11-19 14:18:00.712055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.172 [2024-11-19 14:18:00.712070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:02.172 [2024-11-19 14:18:00.712085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:02.172 [2024-11-19 14:18:00.712093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.172 [2024-11-19 14:18:00.712154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.172 [2024-11-19 14:18:00.712163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:02.172 [2024-11-19 14:18:00.712171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:02.172 [2024-11-19 14:18:00.712178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.172 [2024-11-19 14:18:00.713561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.172 [2024-11-19 14:18:00.713601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:02.172 [2024-11-19 14:18:00.713611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.362 ms 00:17:02.172 [2024-11-19 14:18:00.713620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.172 [2024-11-19 14:18:00.713659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.172 [2024-11-19 14:18:00.713671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:02.172 [2024-11-19 14:18:00.713680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:02.172 [2024-11-19 14:18:00.713689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.172 [2024-11-19 14:18:00.713727] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:02.172 [2024-11-19 14:18:00.713737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.172 [2024-11-19 14:18:00.713745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:02.172 [2024-11-19 14:18:00.713753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:02.172 [2024-11-19 14:18:00.713761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.434 [2024-11-19 14:18:00.740190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.434 [2024-11-19 14:18:00.740235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:02.434 [2024-11-19 14:18:00.740247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.402 ms 00:17:02.434 [2024-11-19 14:18:00.740256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.434 [2024-11-19 14:18:00.740379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.434 [2024-11-19 14:18:00.740391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:02.434 [2024-11-19 14:18:00.740402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:02.434 [2024-11-19 14:18:00.740410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.434 [2024-11-19 14:18:00.741461] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:02.434 [2024-11-19 14:18:00.745074] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 313.278 ms, result 0 00:17:02.434 [2024-11-19 14:18:00.746512] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:02.434 [2024-11-19 14:18:00.760424] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:03.379  [2024-11-19T14:18:02.886Z] Copying: 16/256 [MB] (16 MBps) [2024-11-19T14:18:03.838Z] Copying: 27/256 [MB] (11 MBps) [2024-11-19T14:18:05.226Z] Copying: 48/256 [MB] (20 MBps) [2024-11-19T14:18:06.168Z] Copying: 61/256 [MB] (13 MBps) [2024-11-19T14:18:07.113Z] Copying: 74/256 [MB] (12 MBps) [2024-11-19T14:18:08.057Z] Copying: 85/256 [MB] (10 MBps) [2024-11-19T14:18:09.084Z] Copying: 102/256 [MB] (17 MBps) [2024-11-19T14:18:10.025Z] Copying: 115/256 [MB] (13 MBps) [2024-11-19T14:18:10.967Z] Copying: 125/256 [MB] (10 MBps) [2024-11-19T14:18:11.911Z] Copying: 141/256 [MB] (15 MBps) [2024-11-19T14:18:12.855Z] Copying: 163/256 [MB] (21 MBps) [2024-11-19T14:18:14.240Z] Copying: 182/256 [MB] (19 MBps) [2024-11-19T14:18:15.180Z] Copying: 202/256 [MB] (19 MBps) [2024-11-19T14:18:16.124Z] Copying: 221/256 [MB] (19 MBps) [2024-11-19T14:18:17.065Z] Copying: 238/256 [MB] (16 MBps) [2024-11-19T14:18:17.065Z] Copying: 256/256 [MB] (average 16 MBps)[2024-11-19 14:18:17.053180] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:18.503 [2024-11-19 14:18:17.063540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.503 [2024-11-19 14:18:17.063615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:18.503 [2024-11-19 14:18:17.063630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:18.503 [2024-11-19 14:18:17.063639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.503 [2024-11-19 14:18:17.063670] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:18.765 [2024-11-19 14:18:17.067002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.765 [2024-11-19 14:18:17.067046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:18.765 [2024-11-19 14:18:17.067058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.315 ms 00:17:18.765 [2024-11-19 14:18:17.067067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.765 [2024-11-19 14:18:17.067382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.765 [2024-11-19 14:18:17.067395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:18.765 [2024-11-19 14:18:17.067406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:17:18.765 [2024-11-19 14:18:17.067419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.765 [2024-11-19 14:18:17.071129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.765 [2024-11-19 14:18:17.071153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:18.765 [2024-11-19 14:18:17.071163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.693 ms 00:17:18.765 [2024-11-19 14:18:17.071171] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.765 [2024-11-19 14:18:17.078777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.765 [2024-11-19 14:18:17.078821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:18.765 [2024-11-19 14:18:17.078832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.570 ms 00:17:18.765 [2024-11-19 14:18:17.078841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.765 [2024-11-19 14:18:17.104914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.765 [2024-11-19 14:18:17.104962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:18.765 [2024-11-19 14:18:17.104974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.975 ms 00:17:18.765 [2024-11-19 14:18:17.104982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.765 [2024-11-19 14:18:17.121377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.765 [2024-11-19 14:18:17.121436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:18.765 [2024-11-19 14:18:17.121449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.314 ms 00:17:18.765 [2024-11-19 14:18:17.121458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.765 [2024-11-19 14:18:17.121632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.765 [2024-11-19 14:18:17.121646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:18.765 [2024-11-19 14:18:17.121655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:18.765 [2024-11-19 14:18:17.121663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.765 [2024-11-19 14:18:17.147405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.765 [2024-11-19 14:18:17.147452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:18.765 [2024-11-19 14:18:17.147463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.724 ms 00:17:18.765 [2024-11-19 14:18:17.147470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.765 [2024-11-19 14:18:17.173177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.765 [2024-11-19 14:18:17.173221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:18.765 [2024-11-19 14:18:17.173232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.632 ms 00:17:18.765 [2024-11-19 14:18:17.173240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.765 [2024-11-19 14:18:17.198532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.765 [2024-11-19 14:18:17.198576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:18.765 [2024-11-19 14:18:17.198587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.217 ms 00:17:18.765 [2024-11-19 14:18:17.198595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.765 [2024-11-19 14:18:17.223769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.765 [2024-11-19 14:18:17.223813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:18.765 [2024-11-19 14:18:17.223825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 25.079 ms 00:17:18.765 [2024-11-19 14:18:17.223832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.765 [2024-11-19 14:18:17.223903] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:18.765 [2024-11-19 14:18:17.223920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:18.765 [2024-11-19 14:18:17.223931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:18.765 [2024-11-19 14:18:17.223939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:18.765 [2024-11-19 14:18:17.223947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:18.765 [2024-11-19 14:18:17.223954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:18.765 [2024-11-19 14:18:17.223962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:18.765 [2024-11-19 14:18:17.223971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:18.765 [2024-11-19 14:18:17.223978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:18.765 [2024-11-19 14:18:17.223986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:18.765 [2024-11-19 14:18:17.223994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:18.765 [2024-11-19 14:18:17.224003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:18.765 [2024-11-19 14:18:17.224010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 
[2024-11-19 14:18:17.224101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:17:18.766 [2024-11-19 14:18:17.224287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:18.766 [2024-11-19 14:18:17.224668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:18.767 [2024-11-19 14:18:17.224676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:18.767 [2024-11-19 14:18:17.224683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:18.767 [2024-11-19 14:18:17.224699] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:18.767 [2024-11-19 14:18:17.224708] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3df0115d-4eed-4c52-9819-8d435bdfff0b 00:17:18.767 [2024-11-19 14:18:17.224716] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:18.767 [2024-11-19 14:18:17.224724] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:18.767 [2024-11-19 14:18:17.224731] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:18.767 [2024-11-19 14:18:17.224739] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:18.767 [2024-11-19 14:18:17.224746] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:18.767 [2024-11-19 14:18:17.224758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:18.767 [2024-11-19 14:18:17.224765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:18.767 [2024-11-19 14:18:17.224772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:18.767 [2024-11-19 14:18:17.224780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:18.767 [2024-11-19 14:18:17.224788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.767 [2024-11-19 14:18:17.224796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:18.767 [2024-11-19 14:18:17.224805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:17:18.767 [2024-11-19 14:18:17.224812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.767 [2024-11-19 14:18:17.238027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.767 [2024-11-19 14:18:17.238067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:18.767 [2024-11-19 14:18:17.238086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.181 ms 00:17:18.767 [2024-11-19 14:18:17.238094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.767 [2024-11-19 14:18:17.238328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.767 [2024-11-19 14:18:17.238340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:18.767 [2024-11-19 14:18:17.238348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:17:18.767 [2024-11-19 14:18:17.238355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.767 [2024-11-19 14:18:17.279840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.767 [2024-11-19 14:18:17.279901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:18.767 [2024-11-19 14:18:17.279919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.767 [2024-11-19 14:18:17.279927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.767 [2024-11-19 14:18:17.280021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.767 [2024-11-19 
14:18:17.280031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:18.767 [2024-11-19 14:18:17.280040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.767 [2024-11-19 14:18:17.280047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.767 [2024-11-19 14:18:17.280103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.767 [2024-11-19 14:18:17.280113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:18.767 [2024-11-19 14:18:17.280121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.767 [2024-11-19 14:18:17.280133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.767 [2024-11-19 14:18:17.280152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.767 [2024-11-19 14:18:17.280161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:18.767 [2024-11-19 14:18:17.280169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.767 [2024-11-19 14:18:17.280176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.027 [2024-11-19 14:18:17.359676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.027 [2024-11-19 14:18:17.359724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:19.027 [2024-11-19 14:18:17.359740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.027 [2024-11-19 14:18:17.359747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.027 [2024-11-19 14:18:17.391313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.027 [2024-11-19 14:18:17.391358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:19.027 [2024-11-19 14:18:17.391371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.027 [2024-11-19 14:18:17.391380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.027 [2024-11-19 14:18:17.391439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.027 [2024-11-19 14:18:17.391449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:19.027 [2024-11-19 14:18:17.391458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.027 [2024-11-19 14:18:17.391466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.027 [2024-11-19 14:18:17.391504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.027 [2024-11-19 14:18:17.391513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:19.027 [2024-11-19 14:18:17.391522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.027 [2024-11-19 14:18:17.391530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.027 [2024-11-19 14:18:17.391635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.027 [2024-11-19 14:18:17.391647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:19.027 [2024-11-19 14:18:17.391655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.027 [2024-11-19 14:18:17.391663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.027 [2024-11-19 14:18:17.391701] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:19.027 [2024-11-19 14:18:17.391711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:19.027 [2024-11-19 14:18:17.391719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:19.027 [2024-11-19 14:18:17.391727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:19.027 [2024-11-19 14:18:17.391770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:19.027 [2024-11-19 14:18:17.391780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:19.027 [2024-11-19 14:18:17.391788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:19.027 [2024-11-19 14:18:17.391797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:19.027 [2024-11-19 14:18:17.391847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:19.027 [2024-11-19 14:18:17.391861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:19.027 [2024-11-19 14:18:17.391869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:19.027 [2024-11-19 14:18:17.391902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:19.027 [2024-11-19 14:18:17.392067] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 328.538 ms, result 0
00:17:19.968
00:17:19.968
00:17:19.968 14:18:18 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:17:20.539 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:17:20.539 14:18:18 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:17:20.539 14:18:18 -- ftl/trim.sh@109 -- # fio_kill
00:17:20.539 14:18:18 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:17:20.539 14:18:18 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:20.539 14:18:18 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:17:20.539 14:18:18 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:17:20.539 14:18:18 -- ftl/trim.sh@20 -- # killprocess 72718
00:17:20.539 Process with pid 72718 is not found
00:17:20.539 14:18:18 -- common/autotest_common.sh@936 -- # '[' -z 72718 ']'
00:17:20.539 14:18:18 -- common/autotest_common.sh@940 -- # kill -0 72718
00:17:20.539 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (72718) - No such process
00:17:20.539 14:18:18 -- common/autotest_common.sh@963 -- # echo 'Process with pid 72718 is not found'
00:17:20.539
00:17:20.539 real 1m25.867s
00:17:20.539 user 1m39.219s
00:17:20.539 sys 0m23.057s
00:17:20.539 14:18:18 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:17:20.539 14:18:18 -- common/autotest_common.sh@10 -- # set +x
00:17:20.539 ************************************
00:17:20.539 END TEST ftl_trim
00:17:20.539 ************************************
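The md5sum -c step above is the trim test's pass/fail check: a reference checksum of the test data is recorded up front, and after the FTL device has been shut down and brought back, the data read out of it must still match. A minimal sketch of that round-trip pattern in bash (the paths and the dd sizing here are illustrative stand-ins, not the test's actual files):

  # Sketch only: record a checksum, cycle the device, verify the readback.
  data=/tmp/ftl_check/data          # hypothetical scratch file
  sum=/tmp/ftl_check/data.md5
  mkdir -p "$(dirname "$data")"
  dd if=/dev/urandom of="$data" bs=1M count=16   # write a known pattern
  md5sum "$data" > "$sum"                        # record the reference checksum
  # ... tear down and restart the FTL bdev, read the same region back into $data ...
  md5sum -c "$sum"                               # prints '<file>: OK' and exits 0 on a match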
00:17:20.539 14:18:18 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0
00:17:20.539 14:18:18 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:17:20.539 14:18:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:20.539 14:18:18 -- common/autotest_common.sh@10 -- # set +x
00:17:20.539 ************************************
00:17:20.539 START TEST ftl_restore
************************************
00:17:20.539 14:18:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0
00:17:20.539 * Looking for test storage...
00:17:20.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:17:20.808 14:18:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:17:20.808 14:18:19 -- common/autotest_common.sh@1690 -- # lcov --version
00:17:20.808 14:18:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:17:20.808 14:18:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:17:20.808 14:18:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:17:20.808 14:18:19 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:17:20.808 14:18:19 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:17:20.808 14:18:19 -- scripts/common.sh@335 -- # IFS=.-:
00:17:20.808 14:18:19 -- scripts/common.sh@335 -- # read -ra ver1
00:17:20.808 14:18:19 -- scripts/common.sh@336 -- # IFS=.-:
00:17:20.808 14:18:19 -- scripts/common.sh@336 -- # read -ra ver2
00:17:20.808 14:18:19 -- scripts/common.sh@337 -- # local 'op=<'
00:17:20.808 14:18:19 -- scripts/common.sh@339 -- # ver1_l=2
00:17:20.808 14:18:19 -- scripts/common.sh@340 -- # ver2_l=1
00:17:20.808 14:18:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:17:20.808 14:18:19 -- scripts/common.sh@343 -- # case "$op" in
00:17:20.808 14:18:19 -- scripts/common.sh@344 -- # : 1
00:17:20.808 14:18:19 -- scripts/common.sh@363 -- # (( v = 0 ))
00:17:20.808 14:18:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:20.808 14:18:19 -- scripts/common.sh@364 -- # decimal 1
00:17:20.808 14:18:19 -- scripts/common.sh@352 -- # local d=1
00:17:20.808 14:18:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:20.808 14:18:19 -- scripts/common.sh@354 -- # echo 1
00:17:20.808 14:18:19 -- scripts/common.sh@364 -- # ver1[v]=1
00:17:20.808 14:18:19 -- scripts/common.sh@365 -- # decimal 2
00:17:20.808 14:18:19 -- scripts/common.sh@352 -- # local d=2
00:17:20.808 14:18:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:20.808 14:18:19 -- scripts/common.sh@354 -- # echo 2
00:17:20.808 14:18:19 -- scripts/common.sh@365 -- # ver2[v]=2
00:17:20.808 14:18:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:17:20.808 14:18:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:17:20.808 14:18:19 -- scripts/common.sh@367 -- # return 0
00:17:20.808 14:18:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
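The xtrace above is scripts/common.sh comparing two version strings: each is split on '.', '-' and ':' into an array and the fields are compared one by one, so 'lt 1.15 2' succeeds (1 < 2 in the first field) and the newer set of lcov --rc options is selected. A stripped-down sketch of the same field-wise idea, assuming plain dotted versions only:

  # Sketch: field-wise "less than" for dotted versions, in the spirit of cmp_versions.
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)                     # split both versions on '.'
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                                   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo yes                  # prints: yes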
00:17:20.808 14:18:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:17:20.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:20.808 --rc genhtml_branch_coverage=1
00:17:20.808 --rc genhtml_function_coverage=1
00:17:20.808 --rc genhtml_legend=1
00:17:20.808 --rc geninfo_all_blocks=1
00:17:20.808 --rc geninfo_unexecuted_blocks=1
00:17:20.808
00:17:20.808 '
00:17:20.808 14:18:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:17:20.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:20.808 --rc genhtml_branch_coverage=1
00:17:20.808 --rc genhtml_function_coverage=1
00:17:20.808 --rc genhtml_legend=1
00:17:20.808 --rc geninfo_all_blocks=1
00:17:20.808 --rc geninfo_unexecuted_blocks=1
00:17:20.808
00:17:20.808 '
00:17:20.808 14:18:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:17:20.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:20.808 --rc genhtml_branch_coverage=1
00:17:20.808 --rc genhtml_function_coverage=1
00:17:20.808 --rc genhtml_legend=1
00:17:20.808 --rc geninfo_all_blocks=1
00:17:20.808 --rc geninfo_unexecuted_blocks=1
00:17:20.808
00:17:20.808 '
00:17:20.808 14:18:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:17:20.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:20.808 --rc genhtml_branch_coverage=1
00:17:20.808 --rc genhtml_function_coverage=1
00:17:20.808 --rc genhtml_legend=1
00:17:20.808 --rc geninfo_all_blocks=1
00:17:20.808 --rc geninfo_unexecuted_blocks=1
00:17:20.808
00:17:20.808 '
00:17:20.808 14:18:19 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:17:20.808 14:18:19 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:17:20.808 14:18:19 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:17:20.808 14:18:19 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:17:20.808 14:18:19 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:17:20.808 14:18:19 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:17:20.808 14:18:19 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:20.808 14:18:19 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:17:20.808 14:18:19 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:17:20.808 14:18:19 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:20.808 14:18:19 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:20.808 14:18:19 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:17:20.808 14:18:19 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:17:20.808 14:18:19 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:17:20.808 14:18:19 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:17:20.808 14:18:19 -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:17:20.808 14:18:19 -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:17:20.808 14:18:19 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:20.808 14:18:19 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:20.808 14:18:19 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:17:20.808 14:18:19 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:17:20.808 14:18:19 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:17:20.808 14:18:19 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:17:20.808 14:18:19 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:17:20.808 14:18:19 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:17:20.808 14:18:19 -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:17:20.808 14:18:19 -- ftl/common.sh@23 -- # spdk_ini_pid=
00:17:20.808 14:18:19 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:20.808 14:18:19 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:20.808 14:18:19 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:20.808 14:18:19 -- ftl/restore.sh@13 -- # mktemp -d
00:17:20.808 14:18:19 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.dmGqnQzwo7
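mount_dir comes from mktemp -d, and the entries that follow show restore.sh arming a trap so its restore_kill function tears everything down on any interrupt, termination, or exit. The generic shape of that pattern, with cleanup as an illustrative stand-in for the test's own restore_kill:

  # Sketch: unique scratch dir plus a cleanup trap, as restore.sh does next.
  mount_dir=$(mktemp -d)                        # e.g. /tmp/tmp.XXXXXXXXXX
  cleanup() {
      umount "$mount_dir" 2>/dev/null || true   # harmless if nothing is mounted
      rm -rf "$mount_dir"
  }
  trap 'cleanup; exit 1' SIGINT SIGTERM EXIT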
00:17:20.808 14:18:19 -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:17:20.808 14:18:19 -- ftl/restore.sh@16 -- # case $opt in
00:17:20.808 14:18:19 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0
00:17:20.808 14:18:19 -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:17:20.808 14:18:19 -- ftl/restore.sh@23 -- # shift 2
00:17:20.808 14:18:19 -- ftl/restore.sh@24 -- # device=0000:00:07.0
00:17:20.808 14:18:19 -- ftl/restore.sh@25 -- # timeout=240
00:17:20.808 14:18:19 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:17:20.808 14:18:19 -- ftl/restore.sh@39 -- # svcpid=73040
00:17:20.808 14:18:19 -- ftl/restore.sh@41 -- # waitforlisten 73040
00:17:20.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:20.808 14:18:19 -- common/autotest_common.sh@829 -- # '[' -z 73040 ']'
00:17:20.808 14:18:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:20.808 14:18:19 -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:20.808 14:18:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:20.808 14:18:19 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:20.808 14:18:19 -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:20.808 14:18:19 -- common/autotest_common.sh@10 -- # set +x
00:17:20.808 [2024-11-19 14:18:19.269821] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... [2024-11-19 14:18:19.269990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73040 ]
00:17:21.068 [2024-11-19 14:18:19.428187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:21.328 [2024-11-19 14:18:19.645484] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:17:21.328 [2024-11-19 14:18:19.645713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:22.268 14:18:20 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:22.268 14:18:20 -- common/autotest_common.sh@862 -- # return 0
00:17:22.268 14:18:20 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424
00:17:22.268 14:18:20 -- ftl/common.sh@54 -- # local name=nvme0
00:17:22.268 14:18:20 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0
00:17:22.268 14:18:20 -- ftl/common.sh@56 -- # local size=103424
00:17:22.268 14:18:20 -- ftl/common.sh@59 -- # local base_bdev
00:17:22.268 14:18:20 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
00:17:22.529 14:18:21 -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:17:22.529 14:18:21 -- ftl/common.sh@62 -- # local base_size
00:17:22.529 14:18:21 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:17:22.529 14:18:21 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1
00:17:22.529 14:18:21 -- common/autotest_common.sh@1368 -- # local bdev_info
00:17:22.529 14:18:21 -- common/autotest_common.sh@1369 -- # local bs
00:17:22.529 14:18:21 -- common/autotest_common.sh@1370 -- # local nb
00:17:22.529 14:18:21 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:17:22.789 14:18:21 -- common/autotest_common.sh@1371 -- # bdev_info='[
00:17:22.789 {
00:17:22.789 "name": "nvme0n1",
00:17:22.789 "aliases": [
00:17:22.789 "5a495964-db30-4132-9707-3f7d1d33c7bb" 00:17:22.789 ], 00:17:22.789 "product_name": "NVMe disk", 00:17:22.789 "block_size": 4096, 00:17:22.789 "num_blocks": 1310720, 00:17:22.789 "uuid": "5a495964-db30-4132-9707-3f7d1d33c7bb", 00:17:22.789 "assigned_rate_limits": { 00:17:22.789 "rw_ios_per_sec": 0, 00:17:22.789 "rw_mbytes_per_sec": 0, 00:17:22.789 "r_mbytes_per_sec": 0, 00:17:22.789 "w_mbytes_per_sec": 0 00:17:22.789 }, 00:17:22.789 "claimed": true, 00:17:22.789 "claim_type": "read_many_write_one", 00:17:22.789 "zoned": false, 00:17:22.789 "supported_io_types": { 00:17:22.789 "read": true, 00:17:22.789 "write": true, 00:17:22.789 "unmap": true, 00:17:22.789 "write_zeroes": true, 00:17:22.789 "flush": true, 00:17:22.789 "reset": true, 00:17:22.789 "compare": true, 00:17:22.789 "compare_and_write": false, 00:17:22.789 "abort": true, 00:17:22.789 "nvme_admin": true, 00:17:22.789 "nvme_io": true 00:17:22.789 }, 00:17:22.789 "driver_specific": { 00:17:22.789 "nvme": [ 00:17:22.789 { 00:17:22.789 "pci_address": "0000:00:07.0", 00:17:22.789 "trid": { 00:17:22.789 "trtype": "PCIe", 00:17:22.789 "traddr": "0000:00:07.0" 00:17:22.789 }, 00:17:22.790 "ctrlr_data": { 00:17:22.790 "cntlid": 0, 00:17:22.790 "vendor_id": "0x1b36", 00:17:22.790 "model_number": "QEMU NVMe Ctrl", 00:17:22.790 "serial_number": "12341", 00:17:22.790 "firmware_revision": "8.0.0", 00:17:22.790 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:22.790 "oacs": { 00:17:22.790 "security": 0, 00:17:22.790 "format": 1, 00:17:22.790 "firmware": 0, 00:17:22.790 "ns_manage": 1 00:17:22.790 }, 00:17:22.790 "multi_ctrlr": false, 00:17:22.790 "ana_reporting": false 00:17:22.790 }, 00:17:22.790 "vs": { 00:17:22.790 "nvme_version": "1.4" 00:17:22.790 }, 00:17:22.790 "ns_data": { 00:17:22.790 "id": 1, 00:17:22.790 "can_share": false 00:17:22.790 } 00:17:22.790 } 00:17:22.790 ], 00:17:22.790 "mp_policy": "active_passive" 00:17:22.790 } 00:17:22.790 } 00:17:22.790 ]' 00:17:22.790 14:18:21 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:22.790 14:18:21 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:22.790 14:18:21 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:23.051 14:18:21 -- common/autotest_common.sh@1373 -- # nb=1310720 00:17:23.051 14:18:21 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:17:23.051 14:18:21 -- common/autotest_common.sh@1377 -- # echo 5120 00:17:23.051 14:18:21 -- ftl/common.sh@63 -- # base_size=5120 00:17:23.051 14:18:21 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:23.051 14:18:21 -- ftl/common.sh@67 -- # clear_lvols 00:17:23.051 14:18:21 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:23.051 14:18:21 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:23.051 14:18:21 -- ftl/common.sh@28 -- # stores=8ab1ca4f-d2a0-43fb-902c-94f1bf013f3b 00:17:23.051 14:18:21 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:23.051 14:18:21 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8ab1ca4f-d2a0-43fb-902c-94f1bf013f3b 00:17:23.311 14:18:21 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:23.572 14:18:22 -- ftl/common.sh@68 -- # lvs=8fb6f71e-5a68-4f25-ab30-cb6213984595 00:17:23.572 14:18:22 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8fb6f71e-5a68-4f25-ab30-cb6213984595 00:17:23.833 14:18:22 -- ftl/restore.sh@43 -- # 
split_bdev=06d7978d-8e80-4120-94d9-aa7fa21f2377 00:17:23.833 14:18:22 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:17:23.833 14:18:22 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 06d7978d-8e80-4120-94d9-aa7fa21f2377 00:17:23.833 14:18:22 -- ftl/common.sh@35 -- # local name=nvc0 00:17:23.833 14:18:22 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:23.833 14:18:22 -- ftl/common.sh@37 -- # local base_bdev=06d7978d-8e80-4120-94d9-aa7fa21f2377 00:17:23.833 14:18:22 -- ftl/common.sh@38 -- # local cache_size= 00:17:23.833 14:18:22 -- ftl/common.sh@41 -- # get_bdev_size 06d7978d-8e80-4120-94d9-aa7fa21f2377 00:17:23.833 14:18:22 -- common/autotest_common.sh@1367 -- # local bdev_name=06d7978d-8e80-4120-94d9-aa7fa21f2377 00:17:23.833 14:18:22 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:23.833 14:18:22 -- common/autotest_common.sh@1369 -- # local bs 00:17:23.833 14:18:22 -- common/autotest_common.sh@1370 -- # local nb 00:17:23.833 14:18:22 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06d7978d-8e80-4120-94d9-aa7fa21f2377 00:17:24.094 14:18:22 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:24.094 { 00:17:24.094 "name": "06d7978d-8e80-4120-94d9-aa7fa21f2377", 00:17:24.094 "aliases": [ 00:17:24.094 "lvs/nvme0n1p0" 00:17:24.094 ], 00:17:24.094 "product_name": "Logical Volume", 00:17:24.094 "block_size": 4096, 00:17:24.094 "num_blocks": 26476544, 00:17:24.094 "uuid": "06d7978d-8e80-4120-94d9-aa7fa21f2377", 00:17:24.095 "assigned_rate_limits": { 00:17:24.095 "rw_ios_per_sec": 0, 00:17:24.095 "rw_mbytes_per_sec": 0, 00:17:24.095 "r_mbytes_per_sec": 0, 00:17:24.095 "w_mbytes_per_sec": 0 00:17:24.095 }, 00:17:24.095 "claimed": false, 00:17:24.095 "zoned": false, 00:17:24.095 "supported_io_types": { 00:17:24.095 "read": true, 00:17:24.095 "write": true, 00:17:24.095 "unmap": true, 00:17:24.095 "write_zeroes": true, 00:17:24.095 "flush": false, 00:17:24.095 "reset": true, 00:17:24.095 "compare": false, 00:17:24.095 "compare_and_write": false, 00:17:24.095 "abort": false, 00:17:24.095 "nvme_admin": false, 00:17:24.095 "nvme_io": false 00:17:24.095 }, 00:17:24.095 "driver_specific": { 00:17:24.095 "lvol": { 00:17:24.095 "lvol_store_uuid": "8fb6f71e-5a68-4f25-ab30-cb6213984595", 00:17:24.095 "base_bdev": "nvme0n1", 00:17:24.095 "thin_provision": true, 00:17:24.095 "snapshot": false, 00:17:24.095 "clone": false, 00:17:24.095 "esnap_clone": false 00:17:24.095 } 00:17:24.095 } 00:17:24.095 } 00:17:24.095 ]' 00:17:24.095 14:18:22 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:24.095 14:18:22 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:24.095 14:18:22 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:24.095 14:18:22 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:24.095 14:18:22 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:24.095 14:18:22 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:24.095 14:18:22 -- ftl/common.sh@41 -- # local base_size=5171 00:17:24.095 14:18:22 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:24.095 14:18:22 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:24.356 14:18:22 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:24.356 14:18:22 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:24.356 14:18:22 -- ftl/common.sh@48 -- # get_bdev_size 06d7978d-8e80-4120-94d9-aa7fa21f2377 00:17:24.356 14:18:22 -- 
common/autotest_common.sh@1367 -- # local bdev_name=06d7978d-8e80-4120-94d9-aa7fa21f2377 00:17:24.356 14:18:22 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:24.356 14:18:22 -- common/autotest_common.sh@1369 -- # local bs 00:17:24.356 14:18:22 -- common/autotest_common.sh@1370 -- # local nb 00:17:24.356 14:18:22 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06d7978d-8e80-4120-94d9-aa7fa21f2377 00:17:24.356 14:18:22 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:24.356 { 00:17:24.356 "name": "06d7978d-8e80-4120-94d9-aa7fa21f2377", 00:17:24.356 "aliases": [ 00:17:24.356 "lvs/nvme0n1p0" 00:17:24.356 ], 00:17:24.356 "product_name": "Logical Volume", 00:17:24.356 "block_size": 4096, 00:17:24.356 "num_blocks": 26476544, 00:17:24.356 "uuid": "06d7978d-8e80-4120-94d9-aa7fa21f2377", 00:17:24.356 "assigned_rate_limits": { 00:17:24.356 "rw_ios_per_sec": 0, 00:17:24.356 "rw_mbytes_per_sec": 0, 00:17:24.356 "r_mbytes_per_sec": 0, 00:17:24.356 "w_mbytes_per_sec": 0 00:17:24.356 }, 00:17:24.356 "claimed": false, 00:17:24.356 "zoned": false, 00:17:24.356 "supported_io_types": { 00:17:24.356 "read": true, 00:17:24.356 "write": true, 00:17:24.356 "unmap": true, 00:17:24.356 "write_zeroes": true, 00:17:24.356 "flush": false, 00:17:24.356 "reset": true, 00:17:24.356 "compare": false, 00:17:24.356 "compare_and_write": false, 00:17:24.356 "abort": false, 00:17:24.356 "nvme_admin": false, 00:17:24.356 "nvme_io": false 00:17:24.356 }, 00:17:24.356 "driver_specific": { 00:17:24.356 "lvol": { 00:17:24.356 "lvol_store_uuid": "8fb6f71e-5a68-4f25-ab30-cb6213984595", 00:17:24.356 "base_bdev": "nvme0n1", 00:17:24.356 "thin_provision": true, 00:17:24.356 "snapshot": false, 00:17:24.356 "clone": false, 00:17:24.356 "esnap_clone": false 00:17:24.356 } 00:17:24.356 } 00:17:24.356 } 00:17:24.356 ]' 00:17:24.356 14:18:22 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:24.617 14:18:22 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:24.617 14:18:22 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:24.617 14:18:22 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:24.617 14:18:22 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:24.617 14:18:22 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:24.617 14:18:22 -- ftl/common.sh@48 -- # cache_size=5171 00:17:24.617 14:18:22 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:24.617 14:18:23 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:24.617 14:18:23 -- ftl/restore.sh@48 -- # get_bdev_size 06d7978d-8e80-4120-94d9-aa7fa21f2377 00:17:24.617 14:18:23 -- common/autotest_common.sh@1367 -- # local bdev_name=06d7978d-8e80-4120-94d9-aa7fa21f2377 00:17:24.617 14:18:23 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:24.617 14:18:23 -- common/autotest_common.sh@1369 -- # local bs 00:17:24.617 14:18:23 -- common/autotest_common.sh@1370 -- # local nb 00:17:24.617 14:18:23 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06d7978d-8e80-4120-94d9-aa7fa21f2377 00:17:24.877 14:18:23 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:24.877 { 00:17:24.877 "name": "06d7978d-8e80-4120-94d9-aa7fa21f2377", 00:17:24.877 "aliases": [ 00:17:24.877 "lvs/nvme0n1p0" 00:17:24.877 ], 00:17:24.877 "product_name": "Logical Volume", 00:17:24.877 "block_size": 4096, 00:17:24.877 "num_blocks": 26476544, 00:17:24.877 "uuid": 
"06d7978d-8e80-4120-94d9-aa7fa21f2377", 00:17:24.877 "assigned_rate_limits": { 00:17:24.877 "rw_ios_per_sec": 0, 00:17:24.877 "rw_mbytes_per_sec": 0, 00:17:24.877 "r_mbytes_per_sec": 0, 00:17:24.877 "w_mbytes_per_sec": 0 00:17:24.877 }, 00:17:24.877 "claimed": false, 00:17:24.877 "zoned": false, 00:17:24.877 "supported_io_types": { 00:17:24.877 "read": true, 00:17:24.877 "write": true, 00:17:24.877 "unmap": true, 00:17:24.877 "write_zeroes": true, 00:17:24.877 "flush": false, 00:17:24.877 "reset": true, 00:17:24.877 "compare": false, 00:17:24.877 "compare_and_write": false, 00:17:24.877 "abort": false, 00:17:24.877 "nvme_admin": false, 00:17:24.877 "nvme_io": false 00:17:24.877 }, 00:17:24.877 "driver_specific": { 00:17:24.877 "lvol": { 00:17:24.877 "lvol_store_uuid": "8fb6f71e-5a68-4f25-ab30-cb6213984595", 00:17:24.877 "base_bdev": "nvme0n1", 00:17:24.877 "thin_provision": true, 00:17:24.877 "snapshot": false, 00:17:24.877 "clone": false, 00:17:24.877 "esnap_clone": false 00:17:24.877 } 00:17:24.877 } 00:17:24.877 } 00:17:24.877 ]' 00:17:24.877 14:18:23 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:24.877 14:18:23 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:24.877 14:18:23 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:24.877 14:18:23 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:24.877 14:18:23 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:24.877 14:18:23 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:24.877 14:18:23 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:24.877 14:18:23 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 06d7978d-8e80-4120-94d9-aa7fa21f2377 --l2p_dram_limit 10' 00:17:24.877 14:18:23 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:24.877 14:18:23 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:17:24.877 14:18:23 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:24.877 14:18:23 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:24.877 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:24.877 14:18:23 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 06d7978d-8e80-4120-94d9-aa7fa21f2377 --l2p_dram_limit 10 -c nvc0n1p0 00:17:25.140 [2024-11-19 14:18:23.609232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.140 [2024-11-19 14:18:23.609268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:25.140 [2024-11-19 14:18:23.609279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:25.140 [2024-11-19 14:18:23.609287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.140 [2024-11-19 14:18:23.609329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.140 [2024-11-19 14:18:23.609336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:25.140 [2024-11-19 14:18:23.609344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:25.140 [2024-11-19 14:18:23.609350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.140 [2024-11-19 14:18:23.609366] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:25.140 [2024-11-19 14:18:23.609958] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:25.140 [2024-11-19 14:18:23.609974] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.140 [2024-11-19 14:18:23.609979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:25.140 [2024-11-19 14:18:23.609987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:17:25.140 [2024-11-19 14:18:23.609993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.140 [2024-11-19 14:18:23.610018] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 75a07b1c-071a-49d1-8758-82829c9986d4 00:17:25.140 [2024-11-19 14:18:23.610938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.140 [2024-11-19 14:18:23.610956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:25.140 [2024-11-19 14:18:23.610963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:25.140 [2024-11-19 14:18:23.610970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.140 [2024-11-19 14:18:23.615627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.140 [2024-11-19 14:18:23.615653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:25.140 [2024-11-19 14:18:23.615661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.621 ms 00:17:25.140 [2024-11-19 14:18:23.615668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.140 [2024-11-19 14:18:23.615771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.140 [2024-11-19 14:18:23.615782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:25.140 [2024-11-19 14:18:23.615788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:25.140 [2024-11-19 14:18:23.615797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.140 [2024-11-19 14:18:23.615826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.140 [2024-11-19 14:18:23.615840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:25.140 [2024-11-19 14:18:23.615846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:25.140 [2024-11-19 14:18:23.615853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.140 [2024-11-19 14:18:23.615871] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:25.140 [2024-11-19 14:18:23.618764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.140 [2024-11-19 14:18:23.618787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:25.140 [2024-11-19 14:18:23.618796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.896 ms 00:17:25.140 [2024-11-19 14:18:23.618802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.140 [2024-11-19 14:18:23.618829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.140 [2024-11-19 14:18:23.618835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:25.140 [2024-11-19 14:18:23.618842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:25.140 [2024-11-19 14:18:23.618848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.140 [2024-11-19 14:18:23.618871] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup 
mode 1 00:17:25.140 [2024-11-19 14:18:23.618966] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:25.140 [2024-11-19 14:18:23.618979] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:25.140 [2024-11-19 14:18:23.618987] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:25.140 [2024-11-19 14:18:23.618996] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:25.140 [2024-11-19 14:18:23.619003] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:25.140 [2024-11-19 14:18:23.619012] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:25.140 [2024-11-19 14:18:23.619023] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:25.140 [2024-11-19 14:18:23.619030] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:25.140 [2024-11-19 14:18:23.619035] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:25.140 [2024-11-19 14:18:23.619041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.140 [2024-11-19 14:18:23.619048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:25.140 [2024-11-19 14:18:23.619055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:17:25.141 [2024-11-19 14:18:23.619060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.141 [2024-11-19 14:18:23.619109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.141 [2024-11-19 14:18:23.619115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:25.141 [2024-11-19 14:18:23.619123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:25.141 [2024-11-19 14:18:23.619129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.141 [2024-11-19 14:18:23.619186] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:25.141 [2024-11-19 14:18:23.619193] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:25.141 [2024-11-19 14:18:23.619200] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:25.141 [2024-11-19 14:18:23.619206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.141 [2024-11-19 14:18:23.619212] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:25.141 [2024-11-19 14:18:23.619217] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:25.141 [2024-11-19 14:18:23.619224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:25.141 [2024-11-19 14:18:23.619228] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:25.141 [2024-11-19 14:18:23.619235] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:25.141 [2024-11-19 14:18:23.619240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:25.141 [2024-11-19 14:18:23.619262] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:25.141 [2024-11-19 14:18:23.619268] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:25.141 [2024-11-19 14:18:23.619276] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:17:25.141 [2024-11-19 14:18:23.619281] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:17:25.141 [2024-11-19 14:18:23.619290] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB
00:17:25.141 [2024-11-19 14:18:23.619295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:25.141 [2024-11-19 14:18:23.619302] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:17:25.141 [2024-11-19 14:18:23.619307] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB
00:17:25.141 [2024-11-19 14:18:23.619313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:25.141 [2024-11-19 14:18:23.619318] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc
00:17:25.141 [2024-11-19 14:18:23.619325] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB
00:17:25.141 [2024-11-19 14:18:23.619330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB
00:17:25.141 [2024-11-19 14:18:23.619336] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:17:25.141 [2024-11-19 14:18:23.619341] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:17:25.141 [2024-11-19 14:18:23.619347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:17:25.141 [2024-11-19 14:18:23.619352] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:17:25.141 [2024-11-19 14:18:23.619358] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB
00:17:25.141 [2024-11-19 14:18:23.619363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:17:25.141 [2024-11-19 14:18:23.619369] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:17:25.141 [2024-11-19 14:18:23.619374] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:17:25.141 [2024-11-19 14:18:23.619380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:17:25.141 [2024-11-19 14:18:23.619385] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:17:25.141 [2024-11-19 14:18:23.619392] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB
00:17:25.141 [2024-11-19 14:18:23.619397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:17:25.141 [2024-11-19 14:18:23.619403] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:17:25.141 [2024-11-19 14:18:23.619408] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:17:25.141 [2024-11-19 14:18:23.619414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:17:25.141 [2024-11-19 14:18:23.619419] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:17:25.141 [2024-11-19 14:18:23.619426] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB
00:17:25.141 [2024-11-19 14:18:23.619430] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:17:25.141 [2024-11-19 14:18:23.619437] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:17:25.141 [2024-11-19 14:18:23.619442] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:17:25.141 [2024-11-19 14:18:23.619449] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:17:25.141 [2024-11-19 14:18:23.619454] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:25.141 [2024-11-19 14:18:23.619462] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:17:25.141 [2024-11-19 14:18:23.619467] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:17:25.141 [2024-11-19 14:18:23.619474] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:17:25.141 [2024-11-19 14:18:23.619480] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:17:25.141 [2024-11-19 14:18:23.619488] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:17:25.141 [2024-11-19 14:18:23.619493] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:17:25.141 [2024-11-19 14:18:23.619500] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:17:25.141 [2024-11-19 14:18:23.619507] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:17:25.141 [2024-11-19 14:18:23.619514] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:17:25.141 [2024-11-19 14:18:23.619520] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80
00:17:25.141 [2024-11-19 14:18:23.619526] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80
00:17:25.141 [2024-11-19 14:18:23.619531] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400
00:17:25.141 [2024-11-19 14:18:23.619548] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400
00:17:25.141 [2024-11-19 14:18:23.619553] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400
00:17:25.141 [2024-11-19 14:18:23.619560] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400
00:17:25.141 [2024-11-19 14:18:23.619565] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40
00:17:25.141 [2024-11-19 14:18:23.619572] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40
00:17:25.141 [2024-11-19 14:18:23.619577] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20
00:17:25.141 [2024-11-19 14:18:23.619583] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20
00:17:25.141 [2024-11-19 14:18:23.619588] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000
00:17:25.141 [2024-11-19 14:18:23.619597] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
00:17:25.141 [2024-11-19 14:18:23.619602] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:17:25.141 [2024-11-19 14:18:23.619610] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
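The superblock region table above and the MiB-based layout dump are two views of the same data: each region's blk_offs/blk_sz is counted in FTL blocks, and the MiB figures follow once the block size is fixed. A minimal sketch, assuming the 4 KiB FTL block size implied by these numbers (0x5000 blocks is exactly the 80.00 MiB l2p region):

    # Hypothetical helper, not part of the test: convert a superblock region
    # entry (blk_offs/blk_sz in FTL blocks) into the MiB figures dump_region prints.
    FTL_BLOCK_SIZE = 4096  # bytes; assumption inferred from the dump above

    def region_mib(blk_offs: int, blk_sz: int) -> tuple[float, float]:
        to_mib = FTL_BLOCK_SIZE / (1024 * 1024)
        return (blk_offs * to_mib, blk_sz * to_mib)

    print(region_mib(0x20, 0x5000))      # type:0x2 -> (0.125, 80.0), matching the l2p region
    print(region_mib(0x61e0, 0x100000))  # type:0x8 -> (97.875, 4096.0), matching data_nvc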
00:17:25.141 [2024-11-19 14:18:23.619616] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:17:25.141 [2024-11-19 14:18:23.619622] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:17:25.141 [2024-11-19 14:18:23.619627] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:17:25.141 [2024-11-19 14:18:23.619634] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:17:25.141 [2024-11-19 14:18:23.619640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:25.141 [2024-11-19 14:18:23.619646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:17:25.141 [2024-11-19 14:18:23.619652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms
00:17:25.141 [2024-11-19 14:18:23.619659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:25.141 [2024-11-19 14:18:23.631452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:25.141 [2024-11-19 14:18:23.631478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:25.141 [2024-11-19 14:18:23.631486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.763 ms
00:17:25.141 [2024-11-19 14:18:23.631492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:25.141 [2024-11-19 14:18:23.631559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:25.141 [2024-11-19 14:18:23.631568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:17:25.141 [2024-11-19 14:18:23.631576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms
00:17:25.141 [2024-11-19 14:18:23.631582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:25.141 [2024-11-19 14:18:23.655344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:25.141 [2024-11-19 14:18:23.655367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:25.141 [2024-11-19 14:18:23.655375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.729 ms
00:17:25.141 [2024-11-19 14:18:23.655383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:25.141 [2024-11-19 14:18:23.655406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:25.141 [2024-11-19 14:18:23.655413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:25.141 [2024-11-19 14:18:23.655420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:17:25.141 [2024-11-19 14:18:23.655429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:25.141 [2024-11-19 14:18:23.655717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:25.141 [2024-11-19 14:18:23.655737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:25.142 [2024-11-19 14:18:23.655744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms
00:17:25.142 [2024-11-19 14:18:23.655751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
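Every management step in this trace is logged as the same four-entry group (Action, name, duration, status). A small hypothetical parser, assuming one log entry per line as reflowed here, that folds those groups into tuples for inspecting step timings:

    import re

    # Sketch only: collect (name, duration_ms, status) from trace_step lines, in order.
    def parse_trace_steps(log_text: str):
        steps, name, duration = [], None, None
        for line in log_text.splitlines():
            m = re.search(r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] "
                          r"(name|duration|status): (.+)", line)
            if not m:
                continue
            key, value = m.groups()
            if key == "name":
                name = value
            elif key == "duration":
                duration = float(value.split()[0])  # "11.763 ms" -> 11.763
            elif name is not None:
                steps.append((name, duration, int(value)))
                name = duration = None
        return steps

Run over the startup trace above, it would show Clear L2P (80.324 ms) and Initialize NV cache (23.729 ms) dominating everything before the scrub.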
00:17:25.142 [2024-11-19 14:18:23.655838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:25.142 [2024-11-19 14:18:23.655850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:25.142 [2024-11-19 14:18:23.655856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms
00:17:25.142 [2024-11-19 14:18:23.655863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:25.142 [2024-11-19 14:18:23.667730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:25.142 [2024-11-19 14:18:23.667752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:25.142 [2024-11-19 14:18:23.667759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.844 ms
00:17:25.142 [2024-11-19 14:18:23.667766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:25.142 [2024-11-19 14:18:23.676643] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:17:25.142 [2024-11-19 14:18:23.678897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:25.142 [2024-11-19 14:18:23.678915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:17:25.142 [2024-11-19 14:18:23.678925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.058 ms
00:17:25.142 [2024-11-19 14:18:23.678931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:25.402 [2024-11-19 14:18:23.759278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:25.402 [2024-11-19 14:18:23.759318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P
00:17:25.402 [2024-11-19 14:18:23.759334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.324 ms
00:17:25.402 [2024-11-19 14:18:23.759342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:25.402 [2024-11-19 14:18:23.759386] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time.
00:17:25.403 [2024-11-19 14:18:23.759397] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB
00:17:29.765 [2024-11-19 14:18:27.650820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.765 [2024-11-19 14:18:27.650907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache
00:17:29.765 [2024-11-19 14:18:27.650929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3891.406 ms
00:17:29.765 [2024-11-19 14:18:27.650938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.765 [2024-11-19 14:18:27.651138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:27.651150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:17:29.766 [2024-11-19 14:18:27.651166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms
00:17:29.766 [2024-11-19 14:18:27.651174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:27.677787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:27.677830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata
00:17:29.766 [2024-11-19 14:18:27.677847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.556 ms
00:17:29.766 [2024-11-19 14:18:27.677856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:27.699563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:27.699604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata
00:17:29.766 [2024-11-19 14:18:27.699621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.664 ms
00:17:29.766 [2024-11-19 14:18:27.699628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:27.699908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:27.699923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:17:29.766 [2024-11-19 14:18:27.699933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms
00:17:29.766 [2024-11-19 14:18:27.699940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:27.758713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:27.758742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:17:29.766 [2024-11-19 14:18:27.758754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.735 ms
00:17:29.766 [2024-11-19 14:18:27.758760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:27.778018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:27.778046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:17:29.766 [2024-11-19 14:18:27.778057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.212 ms
00:17:29.766 [2024-11-19 14:18:27.778063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
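The scrub step above wipes the whole 4 GiB nv cache data region, so the logged duration directly gives the effective scrub bandwidth. A one-line check (assuming GiB = 2**30 bytes; both figures come from the log above):

    # 4 GiB scrubbed in 3891.406 ms
    print(f"{(4 * 2**30) / (3891.406 / 1000) / 2**30:.2f} GiB/s")  # ~1.03 GiB/s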
00:17:29.766 [2024-11-19 14:18:27.779078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:27.779103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:17:29.766 [2024-11-19 14:18:27.779114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms
00:17:29.766 [2024-11-19 14:18:27.779120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:27.797315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:27.797339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:17:29.766 [2024-11-19 14:18:27.797349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.166 ms
00:17:29.766 [2024-11-19 14:18:27.797354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:27.797390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:27.797397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:17:29.766 [2024-11-19 14:18:27.797405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:17:29.766 [2024-11-19 14:18:27.797411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:27.797481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:27.797488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:17:29.766 [2024-11-19 14:18:27.797496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms
00:17:29.766 [2024-11-19 14:18:27.797502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:27.798318] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4188.746 ms, result 0
00:17:29.766 {
00:17:29.766 "name": "ftl0",
00:17:29.766 "uuid": "75a07b1c-071a-49d1-8758-82829c9986d4"
00:17:29.766 }
00:17:29.766 14:18:27 -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:17:29.766 14:18:27 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:17:29.766 14:18:28 -- ftl/restore.sh@63 -- # echo ']}'
00:17:29.766 14:18:28 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:17:29.766 [2024-11-19 14:18:28.189859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:28.189903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:29.766 [2024-11-19 14:18:28.189913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:17:29.766 [2024-11-19 14:18:28.189921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:28.189938] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:29.766 [2024-11-19 14:18:28.191975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:28.191995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:29.766 [2024-11-19 14:18:28.192005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.023 ms
00:17:29.766 [2024-11-19 14:18:28.192016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:28.192216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:28.192226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:17:29.766 [2024-11-19 14:18:28.192234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms
00:17:29.766 [2024-11-19 14:18:28.192240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:28.194657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:28.194671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:17:29.766 [2024-11-19 14:18:28.194680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.404 ms
00:17:29.766 [2024-11-19 14:18:28.194686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:28.199397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:28.199418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:17:29.766 [2024-11-19 14:18:28.199426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.693 ms
00:17:29.766 [2024-11-19 14:18:28.199432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:28.217424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:28.217446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:17:29.766 [2024-11-19 14:18:28.217456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.939 ms
00:17:29.766 [2024-11-19 14:18:28.217461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:28.229426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:28.229449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:17:29.766 [2024-11-19 14:18:28.229459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.935 ms
00:17:29.766 [2024-11-19 14:18:28.229465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:28.229569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:28.229577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:17:29.766 [2024-11-19 14:18:28.229586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms
00:17:29.766 [2024-11-19 14:18:28.229593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:28.247424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:28.247446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:17:29.766 [2024-11-19 14:18:28.247456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.815 ms
00:17:29.766 [2024-11-19 14:18:28.247461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:28.264997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:28.265017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:17:29.766 [2024-11-19 14:18:28.265026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.508 ms
00:17:29.766 [2024-11-19 14:18:28.265031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:28.281873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:28.281899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:17:29.766 [2024-11-19 14:18:28.281908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.814 ms
00:17:29.766 [2024-11-19 14:18:28.281914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:28.299144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.766 [2024-11-19 14:18:28.299165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:17:29.766 [2024-11-19 14:18:28.299173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.176 ms
00:17:29.766 [2024-11-19 14:18:28.299178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.766 [2024-11-19 14:18:28.299206] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:17:29.766 [2024-11-19 14:18:28.299219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:17:29.766 [2024-11-19 14:18:28.299228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:17:29.766 [2024-11-19 14:18:28.299233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:17:29.766 [2024-11-19 14:18:28.299240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:17:29.766 [2024-11-19 14:18:28.299259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:17:29.766 [2024-11-19 14:18:28.299266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:17:29.766 [2024-11-19 14:18:28.299272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:17:29.766 [2024-11-19 14:18:28.299278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:17:29.766 [2024-11-19 14:18:28.299284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:17:29.766 [2024-11-19 14:18:28.299292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:17:29.767 [2024-11-19 14:18:28.299859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:17:29.768 [2024-11-19 14:18:28.299865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:17:29.768 [2024-11-19 14:18:28.299885] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:29.768 [2024-11-19 14:18:28.299892] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 75a07b1c-071a-49d1-8758-82829c9986d4
00:17:29.768 [2024-11-19 14:18:28.299898] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:29.768 [2024-11-19 14:18:28.299905] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:29.768 [2024-11-19 14:18:28.299910] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:29.768 [2024-11-19 14:18:28.299917] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:29.768 [2024-11-19 14:18:28.299922] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:29.768 [2024-11-19 14:18:28.299929] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:29.768 [2024-11-19 14:18:28.299934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:29.768 [2024-11-19 14:18:28.299940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:29.768 [2024-11-19 14:18:28.299945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:29.768 [2024-11-19 14:18:28.299953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.768 [2024-11-19 14:18:28.299958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:29.768 [2024-11-19 14:18:28.299967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms
00:17:29.768 [2024-11-19 14:18:28.299972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.768 [2024-11-19 14:18:28.309612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.768 [2024-11-19 14:18:28.309632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:29.768 [2024-11-19 14:18:28.309641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.615 ms
00:17:29.768 [2024-11-19 14:18:28.309646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:29.768 [2024-11-19 14:18:28.309792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:29.768 [2024-11-19 14:18:28.309800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:29.768 [2024-11-19 14:18:28.309807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms
00:17:29.768 [2024-11-19 14:18:28.309812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.028 [2024-11-19 14:18:28.345006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.028 [2024-11-19 14:18:28.345031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:30.028 [2024-11-19 14:18:28.345041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.028 [2024-11-19 14:18:28.345046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.028 [2024-11-19 14:18:28.345091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.028 [2024-11-19 14:18:28.345098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:30.028 [2024-11-19 14:18:28.345105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.028 [2024-11-19 14:18:28.345111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.028 [2024-11-19 14:18:28.345159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.028 [2024-11-19 14:18:28.345166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:30.028 [2024-11-19 14:18:28.345174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.028 [2024-11-19 14:18:28.345179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.028 [2024-11-19 14:18:28.345193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.028 [2024-11-19 14:18:28.345198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:30.028 [2024-11-19 14:18:28.345207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.028 [2024-11-19 14:18:28.345212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.028 [2024-11-19 14:18:28.404327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.028 [2024-11-19 14:18:28.404356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:30.028 [2024-11-19 14:18:28.404367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.028 [2024-11-19 14:18:28.404373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.028 [2024-11-19 14:18:28.427125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.028 [2024-11-19 14:18:28.427151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:30.028 [2024-11-19 14:18:28.427160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.028 [2024-11-19 14:18:28.427166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.028 [2024-11-19 14:18:28.427210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.028 [2024-11-19 14:18:28.427217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:30.028 [2024-11-19 14:18:28.427224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.028 [2024-11-19 14:18:28.427230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.028 [2024-11-19 14:18:28.427271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.028 [2024-11-19 14:18:28.427278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:30.028 [2024-11-19 14:18:28.427286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.028 [2024-11-19 14:18:28.427293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.028 [2024-11-19 14:18:28.427362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.028 [2024-11-19 14:18:28.427370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:30.028 [2024-11-19 14:18:28.427376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.028 [2024-11-19 14:18:28.427382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.028 [2024-11-19 14:18:28.427407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.028 [2024-11-19 14:18:28.427413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:30.028 [2024-11-19 14:18:28.427420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.028 [2024-11-19 14:18:28.427425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.028 [2024-11-19 14:18:28.427456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.028 [2024-11-19 14:18:28.427462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:30.028 [2024-11-19 14:18:28.427469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.028 [2024-11-19 14:18:28.427474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.028 [2024-11-19 14:18:28.427507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.028 [2024-11-19 14:18:28.427513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:30.028 [2024-11-19 14:18:28.427520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.028 [2024-11-19 14:18:28.427527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.028 [2024-11-19 14:18:28.427623] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 237.735 ms, result 0
00:17:30.028 true
00:17:30.028 14:18:28 -- ftl/restore.sh@66 -- # killprocess 73040
00:17:30.028 14:18:28 -- common/autotest_common.sh@936 -- # '[' -z 73040 ']'
00:17:30.028 14:18:28 -- common/autotest_common.sh@940 -- # kill -0 73040
00:17:30.028 14:18:28 -- common/autotest_common.sh@941 -- # uname
00:17:30.028 14:18:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:30.028 14:18:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73040
00:17:30.028 14:18:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:30.028 14:18:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
killing process with pid 73040
00:17:30.028 14:18:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73040'
00:17:30.028 14:18:28 -- common/autotest_common.sh@955 -- # kill 73040
00:17:30.028 14:18:28 -- common/autotest_common.sh@960 -- # wait 73040
00:17:35.322 14:18:33 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:17:39.541 262144+0 records in
00:17:39.541 262144+0 records out
00:17:39.541 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.99854 s, 269 MB/s
00:17:39.541 14:18:37 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
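The dd numbers above are self-consistent: 256K blocks of 4 KiB is exactly 1 GiB, and the reported 269 MB/s is that byte count over the elapsed time in SI megabytes. A quick check:

    blocks, block_size = 256 * 1024, 4 * 1024
    total = blocks * block_size
    assert total == 1073741824                  # "1073741824 bytes (1.1 GB, 1.0 GiB)"
    print(f"{total / 3.99854 / 1e6:.0f} MB/s")  # -> 269, as dd reported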
00:17:41.454 14:18:39 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:41.454 [2024-11-19 14:18:39.825468] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:41.454 [2024-11-19 14:18:39.825560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73279 ]
00:17:41.454 [2024-11-19 14:18:39.969060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:41.714 [2024-11-19 14:18:40.160572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:41.975 [2024-11-19 14:18:40.416954] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 [2024-11-19 14:18:40.417013] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:17:42.238 [2024-11-19 14:18:40.568320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.238 [2024-11-19 14:18:40.568363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:17:42.238 [2024-11-19 14:18:40.568375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:17:42.238 [2024-11-19 14:18:40.568385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.238 [2024-11-19 14:18:40.568431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.238 [2024-11-19 14:18:40.568441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:42.238 [2024-11-19 14:18:40.568449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:17:42.238 [2024-11-19 14:18:40.568456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.238 [2024-11-19 14:18:40.568471] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:17:42.238 [2024-11-19 14:18:40.569195] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:17:42.238 [2024-11-19 14:18:40.569216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.238 [2024-11-19 14:18:40.569223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:42.238 [2024-11-19 14:18:40.569231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms
00:17:42.238 [2024-11-19 14:18:40.569238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.238 [2024-11-19 14:18:40.570334] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:17:42.238 [2024-11-19 14:18:40.582939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.238 [2024-11-19 14:18:40.582974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:17:42.238 [2024-11-19 14:18:40.582985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.606 ms
00:17:42.238 [2024-11-19 14:18:40.582993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.238 [2024-11-19 14:18:40.583053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.238 [2024-11-19 14:18:40.583063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:17:42.238 [2024-11-19 14:18:40.583071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms
00:17:42.238 [2024-11-19 14:18:40.583078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.238 [2024-11-19 14:18:40.588160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.238 [2024-11-19 14:18:40.588190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:42.238 [2024-11-19 14:18:40.588199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.026 ms
00:17:42.238 [2024-11-19 14:18:40.588207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.238 [2024-11-19 14:18:40.588288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.238 [2024-11-19 14:18:40.588297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:42.238 [2024-11-19 14:18:40.588305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms
00:17:42.238 [2024-11-19 14:18:40.588312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.238 [2024-11-19 14:18:40.588353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.238 [2024-11-19 14:18:40.588363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:17:42.238 [2024-11-19 14:18:40.588370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:17:42.238 [2024-11-19 14:18:40.588377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.238 [2024-11-19 14:18:40.588404] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:42.238 [2024-11-19 14:18:40.592032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.238 [2024-11-19 14:18:40.592060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:42.238 [2024-11-19 14:18:40.592069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.638 ms
00:17:42.238 [2024-11-19 14:18:40.592076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.238 [2024-11-19 14:18:40.592105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.238 [2024-11-19 14:18:40.592113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:17:42.238 [2024-11-19 14:18:40.592121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:17:42.238 [2024-11-19 14:18:40.592130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.238 [2024-11-19 14:18:40.592149] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:17:42.238 [2024-11-19 14:18:40.592165] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes
00:17:42.238 [2024-11-19 14:18:40.592197] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:17:42.238 [2024-11-19 14:18:40.592211] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes
00:17:42.238 [2024-11-19 14:18:40.592283] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes
00:17:42.238 [2024-11-19 14:18:40.592292] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:17:42.238 [2024-11-19 14:18:40.592304] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes
00:17:42.238 [2024-11-19 14:18:40.592314] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:17:42.238 [2024-11-19 14:18:40.592322] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:17:42.238 [2024-11-19 14:18:40.592329] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:17:42.238 [2024-11-19 14:18:40.592336] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:17:42.238 [2024-11-19 14:18:40.592343] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024
00:17:42.238 [2024-11-19 14:18:40.592350] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
00:17:42.238 [2024-11-19 14:18:40.592357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.238 [2024-11-19 14:18:40.592364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:17:42.238 [2024-11-19 14:18:40.592371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms
00:17:42.238 [2024-11-19 14:18:40.592378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.238 [2024-11-19 14:18:40.592438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.238 [2024-11-19 14:18:40.592446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:17:42.238 [2024-11-19 14:18:40.592453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms
00:17:42.238 [2024-11-19 14:18:40.592460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
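The layout summary just printed fixes the L2P table size: 20971520 entries at 4 bytes per address is exactly the 80.00 MiB l2p region that appears in the NV cache layout dump that follows. As arithmetic:

    entries, addr_size = 20971520, 4      # both values from the log above
    print(entries * addr_size / 2**20)    # -> 80.0 (MiB), the l2p region size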
00:17:42.238 [2024-11-19 14:18:40.592539] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:17:42.238 [2024-11-19 14:18:40.592555] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:17:42.238 [2024-11-19 14:18:40.592564] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:17:42.238 [2024-11-19 14:18:40.592571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:42.238 [2024-11-19 14:18:40.592579] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:17:42.238 [2024-11-19 14:18:40.592585] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:17:42.238 [2024-11-19 14:18:40.592592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:17:42.238 [2024-11-19 14:18:40.592598] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:17:42.238 [2024-11-19 14:18:40.592607] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:17:42.238 [2024-11-19 14:18:40.592613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:17:42.238 [2024-11-19 14:18:40.592620] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:17:42.238 [2024-11-19 14:18:40.592627] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:17:42.238 [2024-11-19 14:18:40.592633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:17:42.238 [2024-11-19 14:18:40.592640] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:17:42.238 [2024-11-19 14:18:40.592646] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB
00:17:42.238 [2024-11-19 14:18:40.592653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:42.238 [2024-11-19 14:18:40.592665] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:17:42.238 [2024-11-19 14:18:40.592672] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB
00:17:42.238 [2024-11-19 14:18:40.592678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:42.238 [2024-11-19 14:18:40.592683] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc
00:17:42.238 [2024-11-19 14:18:40.592690] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB
00:17:42.238 [2024-11-19 14:18:40.592696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB
00:17:42.238 [2024-11-19 14:18:40.592702] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:17:42.238 [2024-11-19 14:18:40.592708] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:17:42.238 [2024-11-19 14:18:40.592715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:17:42.238 [2024-11-19 14:18:40.592721] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:17:42.238 [2024-11-19 14:18:40.592727] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB
00:17:42.238 [2024-11-19 14:18:40.592733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:17:42.238 [2024-11-19 14:18:40.592739] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:17:42.238 [2024-11-19 14:18:40.592745] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:17:42.238 [2024-11-19 14:18:40.592752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:17:42.239 [2024-11-19 14:18:40.592758] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:17:42.239 [2024-11-19 14:18:40.592764] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB
00:17:42.239 [2024-11-19 14:18:40.592770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:17:42.239 [2024-11-19 14:18:40.592776] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:17:42.239 [2024-11-19 14:18:40.592782] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:17:42.239 [2024-11-19 14:18:40.592788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:17:42.239 [2024-11-19 14:18:40.592795] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:17:42.239 [2024-11-19 14:18:40.592801] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB
00:17:42.239 [2024-11-19 14:18:40.592807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:17:42.239 [2024-11-19 14:18:40.592814] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:17:42.239 [2024-11-19 14:18:40.592824] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:17:42.239 [2024-11-19 14:18:40.592831] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:17:42.239 [2024-11-19 14:18:40.592837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:42.239 [2024-11-19 14:18:40.592845] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:17:42.239 [2024-11-19 14:18:40.592851] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:17:42.239 [2024-11-19 14:18:40.592857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:17:42.239 [2024-11-19 14:18:40.592864] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:17:42.239 [2024-11-19 14:18:40.592870] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:17:42.239 [2024-11-19 14:18:40.592888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:17:42.239 [2024-11-19 14:18:40.592896] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:17:42.239 [2024-11-19 14:18:40.592905] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:17:42.239 [2024-11-19 14:18:40.592913] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:17:42.239 [2024-11-19 14:18:40.592920] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80
00:17:42.239 [2024-11-19 14:18:40.592927] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80
00:17:42.239 [2024-11-19 14:18:40.592934] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400
00:17:42.239 [2024-11-19 14:18:40.592941] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400
00:17:42.239 [2024-11-19 14:18:40.592947] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400
00:17:42.239 [2024-11-19 14:18:40.592954] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400
00:17:42.239 [2024-11-19 14:18:40.592961] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40
00:17:42.239 [2024-11-19 14:18:40.592967] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40
00:17:42.239 [2024-11-19 14:18:40.592976] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20
00:17:42.239 [2024-11-19 14:18:40.592983] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20
00:17:42.239 [2024-11-19 14:18:40.592990] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000
00:17:42.239 [2024-11-19 14:18:40.592998] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
00:17:42.239 [2024-11-19 14:18:40.593004] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:17:42.239 [2024-11-19 14:18:40.593012] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:17:42.239 [2024-11-19 14:18:40.593020] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:17:42.239 [2024-11-19 14:18:40.593027] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:17:42.239 [2024-11-19 14:18:40.593034] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:17:42.239 [2024-11-19 14:18:40.593041] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:17:42.239 [2024-11-19 14:18:40.593048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.239 [2024-11-19 14:18:40.593057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:17:42.239 [2024-11-19 14:18:40.593064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms
00:17:42.239 [2024-11-19 14:18:40.593071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.239 [2024-11-19 14:18:40.608074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.239 [2024-11-19 14:18:40.608105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:42.239 [2024-11-19 14:18:40.608115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.957 ms
00:17:42.239 [2024-11-19 14:18:40.608126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.239 [2024-11-19 14:18:40.608207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.239 [2024-11-19 14:18:40.608215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:17:42.239 [2024-11-19 14:18:40.608222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms
00:17:42.239 [2024-11-19 14:18:40.608229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.239 [2024-11-19 14:18:40.648551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.239 [2024-11-19 14:18:40.648595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:42.239 [2024-11-19 14:18:40.648607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.280 ms
00:17:42.239 [2024-11-19 14:18:40.648615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.239 [2024-11-19 14:18:40.648655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.239 [2024-11-19 14:18:40.648664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:42.239 [2024-11-19 14:18:40.648672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:17:42.239 [2024-11-19 14:18:40.648680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.239 [2024-11-19 14:18:40.649108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.239 [2024-11-19 14:18:40.649135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:42.239 [2024-11-19 14:18:40.649145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms
00:17:42.239 [2024-11-19 14:18:40.649157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.239 [2024-11-19 14:18:40.649268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.239 [2024-11-19 14:18:40.649277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:42.239 [2024-11-19 14:18:40.649285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms
00:17:42.239 [2024-11-19 14:18:40.649292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.239 [2024-11-19 14:18:40.663840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.239 [2024-11-19 14:18:40.663887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:42.239 [2024-11-19 14:18:40.663898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.527 ms 00:17:42.239 [2024-11-19 14:18:40.663905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.239 [2024-11-19 14:18:40.677487] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:42.239 [2024-11-19 14:18:40.677532] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:42.239 [2024-11-19 14:18:40.677543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.239 [2024-11-19 14:18:40.677551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:42.239 [2024-11-19 14:18:40.677559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.550 ms 00:17:42.239 [2024-11-19 14:18:40.677566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.239 [2024-11-19 14:18:40.702627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.239 [2024-11-19 14:18:40.702675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:42.239 [2024-11-19 14:18:40.702688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.018 ms 00:17:42.239 [2024-11-19 14:18:40.702696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.239 [2024-11-19 14:18:40.715275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.239 [2024-11-19 14:18:40.715319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:42.239 [2024-11-19 14:18:40.715331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.526 ms 00:17:42.239 [2024-11-19 14:18:40.715338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.239 [2024-11-19 14:18:40.727852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.239 [2024-11-19 14:18:40.727910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:42.239 [2024-11-19 14:18:40.727931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.472 ms 00:17:42.239 [2024-11-19 14:18:40.727939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.239 [2024-11-19 14:18:40.728318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.239 [2024-11-19 14:18:40.728340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:42.239 [2024-11-19 14:18:40.728350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:17:42.239 [2024-11-19 14:18:40.728360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.239 [2024-11-19 14:18:40.795654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.239 [2024-11-19 14:18:40.795712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:42.239 [2024-11-19 14:18:40.795727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.276 ms 00:17:42.239 [2024-11-19 14:18:40.795736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.499 [2024-11-19 14:18:40.807212] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:42.499 [2024-11-19 14:18:40.810196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.499 [2024-11-19 14:18:40.810239] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:42.499 [2024-11-19 14:18:40.810252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.403 ms 00:17:42.499 [2024-11-19 14:18:40.810259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.499 [2024-11-19 14:18:40.810334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.499 [2024-11-19 14:18:40.810344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:42.499 [2024-11-19 14:18:40.810353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:42.499 [2024-11-19 14:18:40.810361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.499 [2024-11-19 14:18:40.810430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.499 [2024-11-19 14:18:40.810440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:42.499 [2024-11-19 14:18:40.810449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:42.499 [2024-11-19 14:18:40.810457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.499 [2024-11-19 14:18:40.811826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.499 [2024-11-19 14:18:40.811873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:42.499 [2024-11-19 14:18:40.811904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.350 ms 00:17:42.499 [2024-11-19 14:18:40.811913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.499 [2024-11-19 14:18:40.811946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.499 [2024-11-19 14:18:40.811955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:42.499 [2024-11-19 14:18:40.811964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:42.499 [2024-11-19 14:18:40.811977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.499 [2024-11-19 14:18:40.812013] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:42.499 [2024-11-19 14:18:40.812023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.499 [2024-11-19 14:18:40.812031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:42.499 [2024-11-19 14:18:40.812041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:42.499 [2024-11-19 14:18:40.812049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.499 [2024-11-19 14:18:40.837709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.499 [2024-11-19 14:18:40.837756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:42.499 [2024-11-19 14:18:40.837769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.640 ms 00:17:42.499 [2024-11-19 14:18:40.837777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.499 [2024-11-19 14:18:40.837859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.499 [2024-11-19 14:18:40.837887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:42.499 [2024-11-19 14:18:40.837897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:42.499 [2024-11-19 14:18:40.837906] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.499 [2024-11-19 14:18:40.839088] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 270.266 ms, result 0 00:17:43.444  [2024-11-19T14:18:42.947Z] Copying: 17/1024 [MB] (17 MBps) [... intermediate Copying progress ticks ...] [2024-11-19T14:19:37.130Z] Copying: 1024/1024 [MB] (average 18 MBps) [2024-11-19 14:19:37.067148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.568 [2024-11-19 14:19:37.067184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:38.568 [2024-11-19 14:19:37.067194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:38.568 [2024-11-19 14:19:37.067201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.568 [2024-11-19 14:19:37.067218] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:38.568 [2024-11-19 14:19:37.069337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.568 [2024-11-19 14:19:37.069363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:38.568 [2024-11-19 14:19:37.069376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.108 ms 00:18:38.568 [2024-11-19 14:19:37.069383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.568 [2024-11-19 14:19:37.071395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.568 [2024-11-19 14:19:37.071423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:38.568 [2024-11-19 14:19:37.071431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.995 ms 00:18:38.568 [2024-11-19 14:19:37.071436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.568 [2024-11-19 14:19:37.084820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.568 [2024-11-19 14:19:37.084846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:38.568 [2024-11-19 14:19:37.084854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.372 ms 00:18:38.568 [2024-11-19 14:19:37.084865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.568 [2024-11-19 14:19:37.089808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.568 [2024-11-19 14:19:37.089832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:38.568 [2024-11-19 14:19:37.089841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.913 ms 00:18:38.568 [2024-11-19 14:19:37.089848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.568 [2024-11-19 14:19:37.108540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.568 [2024-11-19 14:19:37.108566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:38.568 [2024-11-19 14:19:37.108574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.641 ms 00:18:38.568 [2024-11-19 14:19:37.108580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.568 [2024-11-19 14:19:37.120626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.568 [2024-11-19 14:19:37.120652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:38.568 [2024-11-19 14:19:37.120660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.019 ms 00:18:38.568 [2024-11-19 14:19:37.120667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.568 [2024-11-19 14:19:37.120765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.568 [2024-11-19
14:19:37.120773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:38.568 [2024-11-19 14:19:37.120779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:38.568 [2024-11-19 14:19:37.120785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.830 [2024-11-19 14:19:37.139717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.830 [2024-11-19 14:19:37.139742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:38.830 [2024-11-19 14:19:37.139749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.921 ms 00:18:38.830 [2024-11-19 14:19:37.139755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.830 [2024-11-19 14:19:37.158299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.830 [2024-11-19 14:19:37.158324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:38.830 [2024-11-19 14:19:37.158332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.519 ms 00:18:38.830 [2024-11-19 14:19:37.158344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.830 [2024-11-19 14:19:37.176332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.830 [2024-11-19 14:19:37.176363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:38.830 [2024-11-19 14:19:37.176371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.962 ms 00:18:38.830 [2024-11-19 14:19:37.176376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.830 [2024-11-19 14:19:37.194266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.830 [2024-11-19 14:19:37.194291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:38.830 [2024-11-19 14:19:37.194298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.839 ms 00:18:38.830 [2024-11-19 14:19:37.194303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.831 [2024-11-19 14:19:37.194328] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:38.831 [2024-11-19 14:19:37.194339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 
14:19:37.194396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 
00:18:38.831 [2024-11-19 14:19:37.194542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 
wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:38.831 [2024-11-19 14:19:37.194835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:38.832 [2024-11-19 14:19:37.194840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:38.832 [2024-11-19 14:19:37.194846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:38.832 [2024-11-19 14:19:37.194851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:38.832 [2024-11-19 14:19:37.194856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:38.832 [2024-11-19 14:19:37.194862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:38.832 [2024-11-19 14:19:37.194867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:38.832 [2024-11-19 14:19:37.194883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:38.832 [2024-11-19 14:19:37.194890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:38.832 [2024-11-19 14:19:37.194895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:38.832 [2024-11-19 14:19:37.194901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:38.832 [2024-11-19 14:19:37.194907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:38.832 [2024-11-19 14:19:37.194913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:38.832 [2024-11-19 14:19:37.194918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:38.832 [2024-11-19 14:19:37.194930] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:38.832 [2024-11-19 14:19:37.194936] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 75a07b1c-071a-49d1-8758-82829c9986d4 00:18:38.832 [2024-11-19 14:19:37.194942] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:38.832 [2024-11-19 14:19:37.194948] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:38.832 [2024-11-19 14:19:37.194953] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:38.832 [2024-11-19 14:19:37.194959] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:38.832 [2024-11-19 14:19:37.194964] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:38.832 [2024-11-19 14:19:37.194970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:38.832 [2024-11-19 14:19:37.194975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:38.832 [2024-11-19 14:19:37.194980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:38.832 [2024-11-19 14:19:37.194990] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:38.832 [2024-11-19 14:19:37.194995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.832 [2024-11-19 14:19:37.195000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:38.832 [2024-11-19 14:19:37.195007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:18:38.832 [2024-11-19 14:19:37.195014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.832 [2024-11-19 14:19:37.204271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.832 [2024-11-19 14:19:37.204295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:38.832 [2024-11-19 14:19:37.204302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.238 ms 00:18:38.832 [2024-11-19 14:19:37.204308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.832 [2024-11-19 14:19:37.204449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.832 [2024-11-19 14:19:37.204456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:38.832 [2024-11-19 14:19:37.204466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:18:38.832 [2024-11-19 14:19:37.204471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.832 [2024-11-19 14:19:37.232260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.832 [2024-11-19 14:19:37.232285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:38.832 [2024-11-19 14:19:37.232292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.832 [2024-11-19 14:19:37.232298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.832 [2024-11-19 14:19:37.232342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.832 [2024-11-19 14:19:37.232348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:38.832 [2024-11-19 14:19:37.232357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.832 [2024-11-19 14:19:37.232363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.832 [2024-11-19 14:19:37.232407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.832 [2024-11-19 14:19:37.232415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:38.832 [2024-11-19 14:19:37.232420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.832 [2024-11-19 14:19:37.232425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.832 [2024-11-19 14:19:37.232436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.832 [2024-11-19 14:19:37.232442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:38.832 [2024-11-19 14:19:37.232448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.832 [2024-11-19 14:19:37.232456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.832 [2024-11-19 14:19:37.289768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.832 [2024-11-19 14:19:37.289799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:38.832 [2024-11-19 14:19:37.289806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:18:38.832 [2024-11-19 14:19:37.289812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.832 [2024-11-19 14:19:37.312009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.832 [2024-11-19 14:19:37.312035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:38.832 [2024-11-19 14:19:37.312042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.832 [2024-11-19 14:19:37.312052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.832 [2024-11-19 14:19:37.312092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.832 [2024-11-19 14:19:37.312099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:38.832 [2024-11-19 14:19:37.312105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.832 [2024-11-19 14:19:37.312110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.832 [2024-11-19 14:19:37.312141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.832 [2024-11-19 14:19:37.312148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:38.832 [2024-11-19 14:19:37.312153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.832 [2024-11-19 14:19:37.312159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.832 [2024-11-19 14:19:37.312226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.832 [2024-11-19 14:19:37.312235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:38.832 [2024-11-19 14:19:37.312242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.832 [2024-11-19 14:19:37.312247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.832 [2024-11-19 14:19:37.312272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.832 [2024-11-19 14:19:37.312278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:38.832 [2024-11-19 14:19:37.312285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.832 [2024-11-19 14:19:37.312290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.832 [2024-11-19 14:19:37.312320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.832 [2024-11-19 14:19:37.312327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:38.832 [2024-11-19 14:19:37.312332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.832 [2024-11-19 14:19:37.312338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.832 [2024-11-19 14:19:37.312368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.832 [2024-11-19 14:19:37.312375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:38.832 [2024-11-19 14:19:37.312381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.832 [2024-11-19 14:19:37.312386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.832 [2024-11-19 14:19:37.312475] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 245.305 ms, result 0 00:18:39.776 00:18:39.776 00:18:39.776 14:19:38 -- ftl/restore.sh@74 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:39.776 [2024-11-19 14:19:38.110398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:39.776 [2024-11-19 14:19:38.110658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73889 ] 00:18:39.776 [2024-11-19 14:19:38.261160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.037 [2024-11-19 14:19:38.402229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.299 [2024-11-19 14:19:38.606981] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:40.299 [2024-11-19 14:19:38.607026] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:40.299 [2024-11-19 14:19:38.752682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.299 [2024-11-19 14:19:38.752721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:40.299 [2024-11-19 14:19:38.752734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:40.299 [2024-11-19 14:19:38.752745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.299 [2024-11-19 14:19:38.752794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.299 [2024-11-19 14:19:38.752805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:40.299 [2024-11-19 14:19:38.752813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:40.299 [2024-11-19 14:19:38.752821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.299 [2024-11-19 14:19:38.752837] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:40.299 [2024-11-19 14:19:38.753565] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:40.299 [2024-11-19 14:19:38.753582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.299 [2024-11-19 14:19:38.753590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:40.299 [2024-11-19 14:19:38.753598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:18:40.299 [2024-11-19 14:19:38.753605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.299 [2024-11-19 14:19:38.754955] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:40.299 [2024-11-19 14:19:38.768254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.299 [2024-11-19 14:19:38.768286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:40.299 [2024-11-19 14:19:38.768297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.302 ms 00:18:40.299 [2024-11-19 14:19:38.768304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.299 [2024-11-19 14:19:38.768356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.299 [2024-11-19 14:19:38.768366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:40.299 [2024-11-19 14:19:38.768374] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:40.300 [2024-11-19 14:19:38.768380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.300 [2024-11-19 14:19:38.774898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.300 [2024-11-19 14:19:38.774922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:40.300 [2024-11-19 14:19:38.774930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.443 ms 00:18:40.300 [2024-11-19 14:19:38.774938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.300 [2024-11-19 14:19:38.775018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.300 [2024-11-19 14:19:38.775027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:40.300 [2024-11-19 14:19:38.775035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:40.300 [2024-11-19 14:19:38.775042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.300 [2024-11-19 14:19:38.775090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.300 [2024-11-19 14:19:38.775099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:40.300 [2024-11-19 14:19:38.775107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:40.300 [2024-11-19 14:19:38.775114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.300 [2024-11-19 14:19:38.775142] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:40.300 [2024-11-19 14:19:38.778939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.300 [2024-11-19 14:19:38.778967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:40.300 [2024-11-19 14:19:38.778981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.805 ms 00:18:40.300 [2024-11-19 14:19:38.778988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.300 [2024-11-19 14:19:38.779027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.300 [2024-11-19 14:19:38.779034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:40.300 [2024-11-19 14:19:38.779043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:40.300 [2024-11-19 14:19:38.779052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.300 [2024-11-19 14:19:38.779081] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:40.300 [2024-11-19 14:19:38.779100] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:40.300 [2024-11-19 14:19:38.779134] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:40.300 [2024-11-19 14:19:38.779149] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:40.300 [2024-11-19 14:19:38.779232] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:40.300 [2024-11-19 14:19:38.779242] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:40.300 [2024-11-19 14:19:38.779269] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:40.300 [2024-11-19 14:19:38.779280] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:40.300 [2024-11-19 14:19:38.779288] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:40.300 [2024-11-19 14:19:38.779297] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:40.300 [2024-11-19 14:19:38.779304] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:40.300 [2024-11-19 14:19:38.779311] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:40.300 [2024-11-19 14:19:38.779320] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:40.300 [2024-11-19 14:19:38.779328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.300 [2024-11-19 14:19:38.779335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:40.300 [2024-11-19 14:19:38.779343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:18:40.300 [2024-11-19 14:19:38.779350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.300 [2024-11-19 14:19:38.779415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.300 [2024-11-19 14:19:38.779424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:40.300 [2024-11-19 14:19:38.779431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:40.300 [2024-11-19 14:19:38.779438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.300 [2024-11-19 14:19:38.779508] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:40.300 [2024-11-19 14:19:38.779518] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:40.300 [2024-11-19 14:19:38.779526] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.300 [2024-11-19 14:19:38.779534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.300 [2024-11-19 14:19:38.779542] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:40.300 [2024-11-19 14:19:38.779549] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:40.300 [2024-11-19 14:19:38.779556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:40.300 [2024-11-19 14:19:38.779564] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:40.300 [2024-11-19 14:19:38.779571] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:40.300 [2024-11-19 14:19:38.779579] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.300 [2024-11-19 14:19:38.779586] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:40.300 [2024-11-19 14:19:38.779592] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:40.300 [2024-11-19 14:19:38.779599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.300 [2024-11-19 14:19:38.779608] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:40.300 [2024-11-19 14:19:38.779615] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:40.300 [2024-11-19 14:19:38.779621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.300 
[2024-11-19 14:19:38.779634] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:40.300 [2024-11-19 14:19:38.779641] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:40.300 [2024-11-19 14:19:38.779647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.300 [2024-11-19 14:19:38.779653] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:40.300 [2024-11-19 14:19:38.779660] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:40.300 [2024-11-19 14:19:38.779667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:40.300 [2024-11-19 14:19:38.779674] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:40.300 [2024-11-19 14:19:38.779680] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:40.300 [2024-11-19 14:19:38.779686] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.300 [2024-11-19 14:19:38.779693] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:40.300 [2024-11-19 14:19:38.779699] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:40.300 [2024-11-19 14:19:38.779705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.300 [2024-11-19 14:19:38.779712] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:40.300 [2024-11-19 14:19:38.779719] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:40.300 [2024-11-19 14:19:38.779724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.300 [2024-11-19 14:19:38.779730] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:40.300 [2024-11-19 14:19:38.779736] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:40.300 [2024-11-19 14:19:38.779743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.300 [2024-11-19 14:19:38.779750] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:40.300 [2024-11-19 14:19:38.779756] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:40.300 [2024-11-19 14:19:38.779762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.300 [2024-11-19 14:19:38.779769] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:40.300 [2024-11-19 14:19:38.779775] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:40.300 [2024-11-19 14:19:38.779781] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.300 [2024-11-19 14:19:38.779787] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:40.300 [2024-11-19 14:19:38.779798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:40.300 [2024-11-19 14:19:38.779805] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.300 [2024-11-19 14:19:38.779812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.300 [2024-11-19 14:19:38.779819] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:40.300 [2024-11-19 14:19:38.779827] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:40.300 [2024-11-19 14:19:38.779834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:40.300 [2024-11-19 14:19:38.779841] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region data_btm 00:18:40.300 [2024-11-19 14:19:38.779848] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:40.300 [2024-11-19 14:19:38.779854] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:40.301 [2024-11-19 14:19:38.779862] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:40.301 [2024-11-19 14:19:38.779871] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.301 [2024-11-19 14:19:38.779895] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:40.301 [2024-11-19 14:19:38.779902] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:40.301 [2024-11-19 14:19:38.779910] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:40.301 [2024-11-19 14:19:38.779917] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:40.301 [2024-11-19 14:19:38.779924] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:40.301 [2024-11-19 14:19:38.779931] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:40.301 [2024-11-19 14:19:38.779938] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:40.301 [2024-11-19 14:19:38.779945] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:40.301 [2024-11-19 14:19:38.779953] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:40.301 [2024-11-19 14:19:38.779960] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:40.301 [2024-11-19 14:19:38.779967] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:40.301 [2024-11-19 14:19:38.779974] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:40.301 [2024-11-19 14:19:38.779982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:40.301 [2024-11-19 14:19:38.779988] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:40.301 [2024-11-19 14:19:38.779996] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.301 [2024-11-19 14:19:38.780005] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:40.301 [2024-11-19 14:19:38.780012] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:40.301 [2024-11-19 
14:19:38.780019] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:40.301 [2024-11-19 14:19:38.780027] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:40.301 [2024-11-19 14:19:38.780035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.301 [2024-11-19 14:19:38.780043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:40.301 [2024-11-19 14:19:38.780050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:18:40.301 [2024-11-19 14:19:38.780058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.301 [2024-11-19 14:19:38.796616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.301 [2024-11-19 14:19:38.796644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:40.301 [2024-11-19 14:19:38.796656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.521 ms 00:18:40.301 [2024-11-19 14:19:38.796668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.301 [2024-11-19 14:19:38.796753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.301 [2024-11-19 14:19:38.796762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:40.301 [2024-11-19 14:19:38.796771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:40.301 [2024-11-19 14:19:38.796779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.301 [2024-11-19 14:19:38.841612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.301 [2024-11-19 14:19:38.841649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:40.301 [2024-11-19 14:19:38.841661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.791 ms 00:18:40.301 [2024-11-19 14:19:38.841668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.301 [2024-11-19 14:19:38.841708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.301 [2024-11-19 14:19:38.841717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:40.301 [2024-11-19 14:19:38.841725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:40.301 [2024-11-19 14:19:38.841733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.301 [2024-11-19 14:19:38.842191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.301 [2024-11-19 14:19:38.842215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:40.301 [2024-11-19 14:19:38.842225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:18:40.301 [2024-11-19 14:19:38.842237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.301 [2024-11-19 14:19:38.842354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.301 [2024-11-19 14:19:38.842363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:40.301 [2024-11-19 14:19:38.842371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:18:40.301 [2024-11-19 14:19:38.842379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.301 [2024-11-19 14:19:38.857635] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.301 [2024-11-19 14:19:38.857662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:40.301 [2024-11-19 14:19:38.857672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.236 ms 00:18:40.301 [2024-11-19 14:19:38.857679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.563 [2024-11-19 14:19:38.871324] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:40.563 [2024-11-19 14:19:38.871354] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:40.563 [2024-11-19 14:19:38.871365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.563 [2024-11-19 14:19:38.871374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:40.563 [2024-11-19 14:19:38.871385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.599 ms 00:18:40.563 [2024-11-19 14:19:38.871393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.563 [2024-11-19 14:19:38.896385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.563 [2024-11-19 14:19:38.896414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:40.563 [2024-11-19 14:19:38.896425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.954 ms 00:18:40.563 [2024-11-19 14:19:38.896432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.563 [2024-11-19 14:19:38.908445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.563 [2024-11-19 14:19:38.908470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:40.563 [2024-11-19 14:19:38.908479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.976 ms 00:18:40.563 [2024-11-19 14:19:38.908487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.563 [2024-11-19 14:19:38.920195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.563 [2024-11-19 14:19:38.920226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:40.563 [2024-11-19 14:19:38.920236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.677 ms 00:18:40.564 [2024-11-19 14:19:38.920242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.564 [2024-11-19 14:19:38.920596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.564 [2024-11-19 14:19:38.920614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:40.564 [2024-11-19 14:19:38.920624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:18:40.564 [2024-11-19 14:19:38.920631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.564 [2024-11-19 14:19:38.982275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.564 [2024-11-19 14:19:38.982310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:40.564 [2024-11-19 14:19:38.982324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.627 ms 00:18:40.564 [2024-11-19 14:19:38.982332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.564 [2024-11-19 14:19:38.993245] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p 
maximum resident size is: 9 (of 10) MiB 00:18:40.564 [2024-11-19 14:19:38.996161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.564 [2024-11-19 14:19:38.996187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:40.564 [2024-11-19 14:19:38.996199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.782 ms 00:18:40.564 [2024-11-19 14:19:38.996211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.564 [2024-11-19 14:19:38.996269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.564 [2024-11-19 14:19:38.996279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:40.564 [2024-11-19 14:19:38.996288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:40.564 [2024-11-19 14:19:38.996296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.564 [2024-11-19 14:19:38.996362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.564 [2024-11-19 14:19:38.996372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:40.564 [2024-11-19 14:19:38.996380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:40.564 [2024-11-19 14:19:38.996388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.564 [2024-11-19 14:19:38.997670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.564 [2024-11-19 14:19:38.997698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:40.564 [2024-11-19 14:19:38.997707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 00:18:40.564 [2024-11-19 14:19:38.997714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.564 [2024-11-19 14:19:38.997745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.564 [2024-11-19 14:19:38.997753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:40.564 [2024-11-19 14:19:38.997767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:40.564 [2024-11-19 14:19:38.997774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.564 [2024-11-19 14:19:38.997810] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:40.564 [2024-11-19 14:19:38.997821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.564 [2024-11-19 14:19:38.997831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:40.564 [2024-11-19 14:19:38.997839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:40.564 [2024-11-19 14:19:38.997847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.564 [2024-11-19 14:19:39.022297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.564 [2024-11-19 14:19:39.022332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:40.564 [2024-11-19 14:19:39.022343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.431 ms 00:18:40.564 [2024-11-19 14:19:39.022350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.564 [2024-11-19 14:19:39.022429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.564 [2024-11-19 14:19:39.022439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 
00:18:40.564 [2024-11-19 14:19:39.022447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:18:40.564 [2024-11-19 14:19:39.022456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:40.564 [2024-11-19 14:19:39.024005] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 270.839 ms, result 0
00:18:41.972  [2024-11-19T14:19:41.481Z] Copying: 19/1024 [MB] (19 MBps)
[2024-11-19T14:19:42.425Z] Copying: 35/1024 [MB] (15 MBps)
[2024-11-19T14:19:43.364Z] Copying: 50/1024 [MB] (15 MBps)
[2024-11-19T14:19:44.304Z] Copying: 67/1024 [MB] (16 MBps)
[2024-11-19T14:19:45.247Z] Copying: 87/1024 [MB] (19 MBps)
[2024-11-19T14:19:46.630Z] Copying: 106/1024 [MB] (18 MBps)
[2024-11-19T14:19:47.572Z] Copying: 125/1024 [MB] (19 MBps)
[2024-11-19T14:19:48.512Z] Copying: 147/1024 [MB] (21 MBps)
[2024-11-19T14:19:49.456Z] Copying: 168/1024 [MB] (21 MBps)
[2024-11-19T14:19:50.400Z] Copying: 187/1024 [MB] (18 MBps)
[2024-11-19T14:19:51.336Z] Copying: 206/1024 [MB] (19 MBps)
[2024-11-19T14:19:52.268Z] Copying: 220/1024 [MB] (14 MBps)
[2024-11-19T14:19:53.645Z] Copying: 236/1024 [MB] (16 MBps)
[2024-11-19T14:19:54.212Z] Copying: 253/1024 [MB] (16 MBps)
[2024-11-19T14:19:55.592Z] Copying: 265/1024 [MB] (12 MBps)
[2024-11-19T14:19:56.536Z] Copying: 277/1024 [MB] (12 MBps)
[2024-11-19T14:19:57.480Z] Copying: 290/1024 [MB] (13 MBps)
[2024-11-19T14:19:58.418Z] Copying: 301/1024 [MB] (10 MBps)
[2024-11-19T14:19:59.352Z] Copying: 313/1024 [MB] (12 MBps)
[2024-11-19T14:20:00.284Z] Copying: 329/1024 [MB] (16 MBps)
[2024-11-19T14:20:01.220Z] Copying: 345/1024 [MB] (15 MBps)
[2024-11-19T14:20:02.607Z] Copying: 359/1024 [MB] (13 MBps)
[2024-11-19T14:20:03.548Z] Copying: 369/1024 [MB] (10 MBps)
[2024-11-19T14:20:04.490Z] Copying: 380/1024 [MB] (10 MBps)
[2024-11-19T14:20:05.429Z] Copying: 399/1024 [MB] (18 MBps)
[2024-11-19T14:20:06.369Z] Copying: 414/1024 [MB] (15 MBps)
[2024-11-19T14:20:07.307Z] Copying: 432/1024 [MB] (17 MBps)
[2024-11-19T14:20:08.244Z] Copying: 448/1024 [MB] (16 MBps)
[2024-11-19T14:20:09.625Z] Copying: 460/1024 [MB] (11 MBps)
[2024-11-19T14:20:10.567Z] Copying: 472/1024 [MB] (12 MBps)
[2024-11-19T14:20:11.511Z] Copying: 484/1024 [MB] (11 MBps)
[2024-11-19T14:20:12.449Z] Copying: 499/1024 [MB] (15 MBps)
[2024-11-19T14:20:13.494Z] Copying: 517/1024 [MB] (17 MBps)
[2024-11-19T14:20:14.447Z] Copying: 534/1024 [MB] (16 MBps)
[2024-11-19T14:20:15.388Z] Copying: 548/1024 [MB] (14 MBps)
[2024-11-19T14:20:16.330Z] Copying: 566/1024 [MB] (18 MBps)
[2024-11-19T14:20:17.270Z] Copying: 589/1024 [MB] (22 MBps)
[2024-11-19T14:20:18.213Z] Copying: 608/1024 [MB] (19 MBps)
[2024-11-19T14:20:19.596Z] Copying: 624/1024 [MB] (16 MBps)
[2024-11-19T14:20:20.539Z] Copying: 639/1024 [MB] (14 MBps)
[2024-11-19T14:20:21.485Z] Copying: 655/1024 [MB] (16 MBps)
[2024-11-19T14:20:22.430Z] Copying: 676/1024 [MB] (21 MBps)
[2024-11-19T14:20:23.376Z] Copying: 698/1024 [MB] (21 MBps)
[2024-11-19T14:20:24.319Z] Copying: 717/1024 [MB] (19 MBps)
[2024-11-19T14:20:25.262Z] Copying: 729/1024 [MB] (11 MBps)
[2024-11-19T14:20:26.647Z] Copying: 741/1024 [MB] (11 MBps)
[2024-11-19T14:20:27.217Z] Copying: 753/1024 [MB] (11 MBps)
[2024-11-19T14:20:28.604Z] Copying: 765/1024 [MB] (11 MBps)
[2024-11-19T14:20:29.546Z] Copying: 779/1024 [MB] (14 MBps)
[2024-11-19T14:20:30.491Z] Copying: 791/1024 [MB] (11 MBps)
[2024-11-19T14:20:31.434Z] Copying: 805/1024 [MB] (13 MBps)
[2024-11-19T14:20:32.379Z] Copying: 815/1024 [MB] (10 MBps)
[2024-11-19T14:20:33.323Z] Copying: 826/1024 [MB] (10 MBps)
[2024-11-19T14:20:34.266Z] Copying: 837/1024 [MB] (11 MBps)
[2024-11-19T14:20:35.654Z] Copying: 853/1024 [MB] (15 MBps)
[2024-11-19T14:20:36.227Z] Copying: 866/1024 [MB] (13 MBps)
[2024-11-19T14:20:37.614Z] Copying: 888/1024 [MB] (21 MBps)
[2024-11-19T14:20:38.559Z] Copying: 905/1024 [MB] (17 MBps)
[2024-11-19T14:20:39.504Z] Copying: 920/1024 [MB] (14 MBps)
[2024-11-19T14:20:40.449Z] Copying: 942/1024 [MB] (21 MBps)
[2024-11-19T14:20:41.392Z] Copying: 964/1024 [MB] (22 MBps)
[2024-11-19T14:20:42.337Z] Copying: 986/1024 [MB] (21 MBps)
[2024-11-19T14:20:43.284Z] Copying: 1001/1024 [MB] (14 MBps)
[2024-11-19T14:20:43.545Z] Copying: 1022/1024 [MB] (20 MBps)
[2024-11-19T14:20:44.561Z] Copying: 1024/1024 [MB] (average 15 MBps)
[2024-11-19 14:20:44.248487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.999 [2024-11-19 14:20:44.248804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:19:45.999 [2024-11-19 14:20:44.248831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:19:45.999 [2024-11-19 14:20:44.248842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.999 [2024-11-19 14:20:44.248898] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:45.999 [2024-11-19 14:20:44.251885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.999 [2024-11-19 14:20:44.251935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:19:45.999 [2024-11-19 14:20:44.251947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.958 ms
00:19:45.999 [2024-11-19 14:20:44.251955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.999 [2024-11-19 14:20:44.252202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.999 [2024-11-19 14:20:44.252213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:19:45.999 [2024-11-19 14:20:44.252223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms
00:19:45.999 [2024-11-19 14:20:44.252231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.999 [2024-11-19 14:20:44.255711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.999 [2024-11-19 14:20:44.255860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:19:45.999 [2024-11-19 14:20:44.255895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.464 ms
00:19:45.999 [2024-11-19 14:20:44.255905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.999 [2024-11-19 14:20:44.262849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.999 [2024-11-19 14:20:44.263014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:19:45.999 [2024-11-19 14:20:44.263034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.902 ms
00:19:45.999 [2024-11-19 14:20:44.263044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.999 [2024-11-19 14:20:44.291156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.999 [2024-11-19 14:20:44.291201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:19:45.999 [2024-11-19 14:20:44.291213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.038 ms
00:19:45.999 [2024-11-19
14:20:44.291221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.999 [2024-11-19 14:20:44.308908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.999 [2024-11-19 14:20:44.308957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:45.999 [2024-11-19 14:20:44.308969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.642 ms 00:19:45.999 [2024-11-19 14:20:44.308983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.999 [2024-11-19 14:20:44.309140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.999 [2024-11-19 14:20:44.309152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:45.999 [2024-11-19 14:20:44.309161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:19:45.999 [2024-11-19 14:20:44.309169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.999 [2024-11-19 14:20:44.335067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.999 [2024-11-19 14:20:44.335111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:45.999 [2024-11-19 14:20:44.335122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.882 ms 00:19:45.999 [2024-11-19 14:20:44.335129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.999 [2024-11-19 14:20:44.360310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.999 [2024-11-19 14:20:44.360361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:45.999 [2024-11-19 14:20:44.360385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.136 ms 00:19:45.999 [2024-11-19 14:20:44.360392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.999 [2024-11-19 14:20:44.384781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.000 [2024-11-19 14:20:44.384826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:46.000 [2024-11-19 14:20:44.384837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.346 ms 00:19:46.000 [2024-11-19 14:20:44.384844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.000 [2024-11-19 14:20:44.409421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.000 [2024-11-19 14:20:44.409462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:46.000 [2024-11-19 14:20:44.409474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.482 ms 00:19:46.000 [2024-11-19 14:20:44.409481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.000 [2024-11-19 14:20:44.409524] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:46.000 [2024-11-19 14:20:44.409546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409582] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 
14:20:44.409796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.409999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:19:46.000 [2024-11-19 14:20:44.410007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-19 14:20:44.410223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:46.001 [2024-11-19 14:20:44.410231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:46.001 [2024-11-19 14:20:44.410241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:46.001 [2024-11-19 14:20:44.410248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:46.001 [2024-11-19 14:20:44.410256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:46.001 [2024-11-19 14:20:44.410263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:46.001 [2024-11-19 14:20:44.410270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:46.001 [2024-11-19 14:20:44.410278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:46.001 [2024-11-19 14:20:44.410285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:46.001 [2024-11-19 14:20:44.410293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:46.001 [2024-11-19 14:20:44.410301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:46.001 [2024-11-19 14:20:44.410310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:46.001 [2024-11-19 14:20:44.410318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:46.001 [2024-11-19 14:20:44.410327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:46.001 [2024-11-19 14:20:44.410334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:46.001 [2024-11-19 14:20:44.410342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:46.001 [2024-11-19 14:20:44.410349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:46.001 [2024-11-19 14:20:44.410357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:46.001 [2024-11-19 14:20:44.410373] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:46.001 [2024-11-19 14:20:44.410381] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 75a07b1c-071a-49d1-8758-82829c9986d4 00:19:46.001 [2024-11-19 14:20:44.410389] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:46.001 [2024-11-19 14:20:44.410397] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:46.001 [2024-11-19 
14:20:44.410405] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:46.001 [2024-11-19 14:20:44.410412] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:46.001 [2024-11-19 14:20:44.410420] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:46.001 [2024-11-19 14:20:44.410428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:46.001 [2024-11-19 14:20:44.410436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:46.001 [2024-11-19 14:20:44.410450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:46.001 [2024-11-19 14:20:44.410457] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:46.001 [2024-11-19 14:20:44.410464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:46.001 [2024-11-19 14:20:44.410471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:46.001 [2024-11-19 14:20:44.410483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms
00:19:46.001 [2024-11-19 14:20:44.410490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:46.001 [2024-11-19 14:20:44.424615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:46.001 [2024-11-19 14:20:44.424777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:46.001 [2024-11-19 14:20:44.424843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.090 ms
00:19:46.001 [2024-11-19 14:20:44.424921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:46.001 [2024-11-19 14:20:44.425181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:46.001 [2024-11-19 14:20:44.425220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:19:46.001 [2024-11-19 14:20:44.425297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms
00:19:46.001 [2024-11-19 14:20:44.425320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:46.001 [2024-11-19 14:20:44.464439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:46.001 [2024-11-19 14:20:44.464610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:46.001 [2024-11-19 14:20:44.464668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:46.001 [2024-11-19 14:20:44.464691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:46.001 [2024-11-19 14:20:44.464775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:46.001 [2024-11-19 14:20:44.464804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:19:46.001 [2024-11-19 14:20:44.464824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:46.001 [2024-11-19 14:20:44.464843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:46.001 [2024-11-19 14:20:44.464950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:46.001 [2024-11-19 14:20:44.464979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:46.001 [2024-11-19 14:20:44.464999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:46.001 [2024-11-19 14:20:44.465067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:46.001 [2024-11-19 14:20:44.465100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*:
[FTL][ftl0] Rollback 00:19:46.001 [2024-11-19 14:20:44.465121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:46.001 [2024-11-19 14:20:44.465146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.001 [2024-11-19 14:20:44.465229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.001 [2024-11-19 14:20:44.544336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.001 [2024-11-19 14:20:44.544519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:46.001 [2024-11-19 14:20:44.544580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.001 [2024-11-19 14:20:44.544604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.263 [2024-11-19 14:20:44.575810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.263 [2024-11-19 14:20:44.575853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:46.263 [2024-11-19 14:20:44.575870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.263 [2024-11-19 14:20:44.575906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.263 [2024-11-19 14:20:44.575971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.263 [2024-11-19 14:20:44.575981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:46.263 [2024-11-19 14:20:44.575990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.263 [2024-11-19 14:20:44.575999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.263 [2024-11-19 14:20:44.576042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.263 [2024-11-19 14:20:44.576052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:46.263 [2024-11-19 14:20:44.576060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.263 [2024-11-19 14:20:44.576072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.263 [2024-11-19 14:20:44.576173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.263 [2024-11-19 14:20:44.576183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:46.263 [2024-11-19 14:20:44.576192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.263 [2024-11-19 14:20:44.576201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.263 [2024-11-19 14:20:44.576232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.263 [2024-11-19 14:20:44.576242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:46.263 [2024-11-19 14:20:44.576251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.263 [2024-11-19 14:20:44.576259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.263 [2024-11-19 14:20:44.576304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.263 [2024-11-19 14:20:44.576314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:46.263 [2024-11-19 14:20:44.576322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.263 [2024-11-19 14:20:44.576329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.263 
[2024-11-19 14:20:44.576380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:46.263 [2024-11-19 14:20:44.576396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:46.263 [2024-11-19 14:20:44.576405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:46.263 [2024-11-19 14:20:44.576415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:46.263 [2024-11-19 14:20:44.576547] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 328.031 ms, result 0
00:19:47.207
00:19:47.207
00:19:47.207 14:20:45 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:19:49.122 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:19:49.122 14:20:47 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
00:19:49.122 [2024-11-19 14:20:47.566497] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:19:49.122 [2024-11-19 14:20:47.566602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74606 ]
00:19:49.383 [2024-11-19 14:20:47.711965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:49.383 [2024-11-19 14:20:47.930999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:49.957 [2024-11-19 14:20:48.215821] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:49.957 [2024-11-19 14:20:48.215924] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:49.957 [2024-11-19 14:20:48.372301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:49.957 [2024-11-19 14:20:48.372544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:19:49.957 [2024-11-19 14:20:48.372571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:19:49.957 [2024-11-19 14:20:48.372585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:49.957 [2024-11-19 14:20:48.372654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:49.957 [2024-11-19 14:20:48.372664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:49.957 [2024-11-19 14:20:48.372673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms
00:19:49.958 [2024-11-19 14:20:48.372681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:49.958 [2024-11-19 14:20:48.372704] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:19:49.958 [2024-11-19 14:20:48.373603] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:19:49.958 [2024-11-19 14:20:48.373659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:49.958 [2024-11-19 14:20:48.373667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:49.958 [2024-11-19 14:20:48.373677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms
00:19:49.958 [2024-11-19 14:20:48.373685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status:
0 00:19:49.958 [2024-11-19 14:20:48.375534] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:49.958 [2024-11-19 14:20:48.390257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.958 [2024-11-19 14:20:48.390467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:49.958 [2024-11-19 14:20:48.390667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.725 ms 00:19:49.958 [2024-11-19 14:20:48.390681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.958 [2024-11-19 14:20:48.390749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.958 [2024-11-19 14:20:48.390760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:49.958 [2024-11-19 14:20:48.390770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:49.958 [2024-11-19 14:20:48.390777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.958 [2024-11-19 14:20:48.399038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.958 [2024-11-19 14:20:48.399082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:49.958 [2024-11-19 14:20:48.399093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.178 ms 00:19:49.958 [2024-11-19 14:20:48.399101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.958 [2024-11-19 14:20:48.399200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.958 [2024-11-19 14:20:48.399210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:49.958 [2024-11-19 14:20:48.399220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:49.958 [2024-11-19 14:20:48.399228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.958 [2024-11-19 14:20:48.399274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.958 [2024-11-19 14:20:48.399283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:49.958 [2024-11-19 14:20:48.399292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:49.958 [2024-11-19 14:20:48.399300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.958 [2024-11-19 14:20:48.399345] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:49.958 [2024-11-19 14:20:48.403562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.958 [2024-11-19 14:20:48.403598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:49.958 [2024-11-19 14:20:48.403609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.232 ms 00:19:49.958 [2024-11-19 14:20:48.403617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.958 [2024-11-19 14:20:48.403658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.958 [2024-11-19 14:20:48.403666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:49.958 [2024-11-19 14:20:48.403675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:49.958 [2024-11-19 14:20:48.403685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.958 [2024-11-19 14:20:48.403737] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 
0 00:19:49.958 [2024-11-19 14:20:48.403761] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:49.958 [2024-11-19 14:20:48.403795] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:49.958 [2024-11-19 14:20:48.403811] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:49.958 [2024-11-19 14:20:48.403909] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:49.958 [2024-11-19 14:20:48.403921] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:49.958 [2024-11-19 14:20:48.403935] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:49.958 [2024-11-19 14:20:48.403952] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:49.958 [2024-11-19 14:20:48.403961] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:49.958 [2024-11-19 14:20:48.403970] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:49.958 [2024-11-19 14:20:48.403977] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:49.958 [2024-11-19 14:20:48.403984] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:49.958 [2024-11-19 14:20:48.403992] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:49.958 [2024-11-19 14:20:48.404000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.958 [2024-11-19 14:20:48.404008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:49.958 [2024-11-19 14:20:48.404016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:19:49.958 [2024-11-19 14:20:48.404023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.958 [2024-11-19 14:20:48.404087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.958 [2024-11-19 14:20:48.404096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:49.958 [2024-11-19 14:20:48.404103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:49.958 [2024-11-19 14:20:48.404111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.958 [2024-11-19 14:20:48.404183] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:49.958 [2024-11-19 14:20:48.404195] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:49.958 [2024-11-19 14:20:48.404203] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:49.958 [2024-11-19 14:20:48.404211] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.958 [2024-11-19 14:20:48.404218] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:49.958 [2024-11-19 14:20:48.404225] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:49.958 [2024-11-19 14:20:48.404231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:49.958 [2024-11-19 14:20:48.404238] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:49.958 [2024-11-19 14:20:48.404248] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:49.958 [2024-11-19 14:20:48.404255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:49.958 [2024-11-19 14:20:48.404262] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:49.958 [2024-11-19 14:20:48.404269] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:49.958 [2024-11-19 14:20:48.404277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:49.958 [2024-11-19 14:20:48.404284] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:49.958 [2024-11-19 14:20:48.404291] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:49.958 [2024-11-19 14:20:48.404298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.958 [2024-11-19 14:20:48.404312] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:49.958 [2024-11-19 14:20:48.404318] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:49.958 [2024-11-19 14:20:48.404325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.958 [2024-11-19 14:20:48.404331] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:49.958 [2024-11-19 14:20:48.404338] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:49.958 [2024-11-19 14:20:48.404345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:49.958 [2024-11-19 14:20:48.404351] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:49.958 [2024-11-19 14:20:48.404358] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:49.958 [2024-11-19 14:20:48.404365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:49.958 [2024-11-19 14:20:48.404372] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:49.958 [2024-11-19 14:20:48.404378] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:49.958 [2024-11-19 14:20:48.404384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:49.958 [2024-11-19 14:20:48.404390] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:49.958 [2024-11-19 14:20:48.404397] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:49.958 [2024-11-19 14:20:48.404403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:49.958 [2024-11-19 14:20:48.404409] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:49.958 [2024-11-19 14:20:48.404416] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:49.958 [2024-11-19 14:20:48.404422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:49.958 [2024-11-19 14:20:48.404428] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:49.958 [2024-11-19 14:20:48.404434] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:49.958 [2024-11-19 14:20:48.404440] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:49.958 [2024-11-19 14:20:48.404447] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:49.958 [2024-11-19 14:20:48.404453] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:49.958 [2024-11-19 14:20:48.404459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:49.958 [2024-11-19 14:20:48.404465] 
ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:49.958 [2024-11-19 14:20:48.404475] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:49.958 [2024-11-19 14:20:48.404483] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:49.958 [2024-11-19 14:20:48.404490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.958 [2024-11-19 14:20:48.404501] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:49.958 [2024-11-19 14:20:48.404508] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:49.958 [2024-11-19 14:20:48.404515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:49.959 [2024-11-19 14:20:48.404523] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:49.959 [2024-11-19 14:20:48.404529] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:49.959 [2024-11-19 14:20:48.404536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:49.959 [2024-11-19 14:20:48.404544] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:49.959 [2024-11-19 14:20:48.404554] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:49.959 [2024-11-19 14:20:48.404562] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:49.959 [2024-11-19 14:20:48.404569] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:49.959 [2024-11-19 14:20:48.404576] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:49.959 [2024-11-19 14:20:48.404584] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:49.959 [2024-11-19 14:20:48.404591] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:49.959 [2024-11-19 14:20:48.404598] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:49.959 [2024-11-19 14:20:48.404604] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:49.959 [2024-11-19 14:20:48.404611] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:49.959 [2024-11-19 14:20:48.404618] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:49.959 [2024-11-19 14:20:48.404625] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:49.959 [2024-11-19 14:20:48.404632] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:49.959 [2024-11-19 14:20:48.404639] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:49.959 [2024-11-19 14:20:48.404646] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:49.959 [2024-11-19 14:20:48.404653] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:49.959 [2024-11-19 14:20:48.404661] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:49.959 [2024-11-19 14:20:48.404669] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:49.959 [2024-11-19 14:20:48.404676] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:49.959 [2024-11-19 14:20:48.404684] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:49.959 [2024-11-19 14:20:48.404691] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:49.959 [2024-11-19 14:20:48.404701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.959 [2024-11-19 14:20:48.404709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:49.959 [2024-11-19 14:20:48.404716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:19:49.959 [2024-11-19 14:20:48.404723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.959 [2024-11-19 14:20:48.422528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.959 [2024-11-19 14:20:48.422694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:49.959 [2024-11-19 14:20:48.422712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.759 ms 00:19:49.959 [2024-11-19 14:20:48.422728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.959 [2024-11-19 14:20:48.422822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.959 [2024-11-19 14:20:48.422830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:49.959 [2024-11-19 14:20:48.422838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:49.959 [2024-11-19 14:20:48.422846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.959 [2024-11-19 14:20:48.466511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.959 [2024-11-19 14:20:48.466570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:49.959 [2024-11-19 14:20:48.466583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.591 ms 00:19:49.959 [2024-11-19 14:20:48.466592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.959 [2024-11-19 14:20:48.466642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.959 [2024-11-19 14:20:48.466652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:49.959 [2024-11-19 14:20:48.466661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:49.959 [2024-11-19 14:20:48.466668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.959 [2024-11-19 14:20:48.467287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.959 [2024-11-19 
14:20:48.467337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:49.959 [2024-11-19 14:20:48.467349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:19:49.959 [2024-11-19 14:20:48.467364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.959 [2024-11-19 14:20:48.467493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.959 [2024-11-19 14:20:48.467503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:49.959 [2024-11-19 14:20:48.467512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:19:49.959 [2024-11-19 14:20:48.467519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.959 [2024-11-19 14:20:48.484199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.959 [2024-11-19 14:20:48.484242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:49.959 [2024-11-19 14:20:48.484253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.655 ms 00:19:49.959 [2024-11-19 14:20:48.484260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.959 [2024-11-19 14:20:48.498678] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:49.959 [2024-11-19 14:20:48.498723] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:49.959 [2024-11-19 14:20:48.498735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.959 [2024-11-19 14:20:48.498745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:49.959 [2024-11-19 14:20:48.498754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.361 ms 00:19:49.959 [2024-11-19 14:20:48.498761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-19 14:20:48.524976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-19 14:20:48.525179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:50.221 [2024-11-19 14:20:48.525201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.159 ms 00:19:50.221 [2024-11-19 14:20:48.525210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-19 14:20:48.538609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-19 14:20:48.538654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:50.221 [2024-11-19 14:20:48.538667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.360 ms 00:19:50.221 [2024-11-19 14:20:48.538675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-19 14:20:48.551352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-19 14:20:48.551405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:50.221 [2024-11-19 14:20:48.551416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.629 ms 00:19:50.221 [2024-11-19 14:20:48.551423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-19 14:20:48.551817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-19 14:20:48.551830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:19:50.221 [2024-11-19 14:20:48.551839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:19:50.221 [2024-11-19 14:20:48.551846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-19 14:20:48.620059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-19 14:20:48.620117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:50.221 [2024-11-19 14:20:48.620134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.194 ms 00:19:50.221 [2024-11-19 14:20:48.620142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-19 14:20:48.631469] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:50.221 [2024-11-19 14:20:48.634805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-19 14:20:48.634850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:50.221 [2024-11-19 14:20:48.634862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.600 ms 00:19:50.221 [2024-11-19 14:20:48.634894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-19 14:20:48.634968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-19 14:20:48.634978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:50.221 [2024-11-19 14:20:48.634987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:50.221 [2024-11-19 14:20:48.634995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-19 14:20:48.635060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-19 14:20:48.635072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:50.221 [2024-11-19 14:20:48.635080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:50.221 [2024-11-19 14:20:48.635089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-19 14:20:48.636477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-19 14:20:48.636523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:50.221 [2024-11-19 14:20:48.636535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.366 ms 00:19:50.221 [2024-11-19 14:20:48.636542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-19 14:20:48.636580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-19 14:20:48.636588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:50.221 [2024-11-19 14:20:48.636602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:50.222 [2024-11-19 14:20:48.636610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.222 [2024-11-19 14:20:48.636646] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:50.222 [2024-11-19 14:20:48.636657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.222 [2024-11-19 14:20:48.636669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:50.222 [2024-11-19 14:20:48.636677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.012 ms 00:19:50.222 [2024-11-19 14:20:48.636685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.222 [2024-11-19 14:20:48.662457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.222 [2024-11-19 14:20:48.662504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:50.222 [2024-11-19 14:20:48.662517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.752 ms 00:19:50.222 [2024-11-19 14:20:48.662526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.222 [2024-11-19 14:20:48.662618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.222 [2024-11-19 14:20:48.662629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:50.222 [2024-11-19 14:20:48.662638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:50.222 [2024-11-19 14:20:48.662646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.222 [2024-11-19 14:20:48.663927] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 291.121 ms, result 0 00:19:51.169  [2024-11-19T14:20:51.118Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-19T14:20:51.690Z] Copying: 46/1024 [MB] (30 MBps) [2024-11-19T14:20:53.079Z] Copying: 61/1024 [MB] (14 MBps) [2024-11-19T14:20:54.023Z] Copying: 83/1024 [MB] (21 MBps) [2024-11-19T14:20:54.967Z] Copying: 112/1024 [MB] (29 MBps) [2024-11-19T14:20:55.910Z] Copying: 142/1024 [MB] (29 MBps) [2024-11-19T14:20:56.854Z] Copying: 159/1024 [MB] (16 MBps) [2024-11-19T14:20:57.799Z] Copying: 173/1024 [MB] (14 MBps) [2024-11-19T14:20:58.759Z] Copying: 191/1024 [MB] (18 MBps) [2024-11-19T14:20:59.704Z] Copying: 202/1024 [MB] (10 MBps) [2024-11-19T14:21:01.093Z] Copying: 223/1024 [MB] (20 MBps) [2024-11-19T14:21:02.037Z] Copying: 235/1024 [MB] (12 MBps) [2024-11-19T14:21:02.981Z] Copying: 256/1024 [MB] (20 MBps) [2024-11-19T14:21:03.927Z] Copying: 267/1024 [MB] (11 MBps) [2024-11-19T14:21:04.873Z] Copying: 280/1024 [MB] (13 MBps) [2024-11-19T14:21:05.817Z] Copying: 292/1024 [MB] (12 MBps) [2024-11-19T14:21:06.761Z] Copying: 319/1024 [MB] (26 MBps) [2024-11-19T14:21:07.704Z] Copying: 336/1024 [MB] (16 MBps) [2024-11-19T14:21:09.092Z] Copying: 357/1024 [MB] (21 MBps) [2024-11-19T14:21:10.036Z] Copying: 370/1024 [MB] (12 MBps) [2024-11-19T14:21:10.980Z] Copying: 392/1024 [MB] (21 MBps) [2024-11-19T14:21:11.926Z] Copying: 408/1024 [MB] (16 MBps) [2024-11-19T14:21:12.869Z] Copying: 424/1024 [MB] (15 MBps) [2024-11-19T14:21:13.812Z] Copying: 453/1024 [MB] (29 MBps) [2024-11-19T14:21:14.755Z] Copying: 467/1024 [MB] (13 MBps) [2024-11-19T14:21:15.699Z] Copying: 479/1024 [MB] (12 MBps) [2024-11-19T14:21:16.741Z] Copying: 493/1024 [MB] (13 MBps) [2024-11-19T14:21:17.683Z] Copying: 504/1024 [MB] (11 MBps) [2024-11-19T14:21:19.069Z] Copying: 535/1024 [MB] (30 MBps) [2024-11-19T14:21:20.038Z] Copying: 564/1024 [MB] (29 MBps) [2024-11-19T14:21:20.981Z] Copying: 593/1024 [MB] (29 MBps) [2024-11-19T14:21:21.922Z] Copying: 624/1024 [MB] (30 MBps) [2024-11-19T14:21:22.863Z] Copying: 645/1024 [MB] (21 MBps) [2024-11-19T14:21:23.816Z] Copying: 659/1024 [MB] (14 MBps) [2024-11-19T14:21:24.758Z] Copying: 675/1024 [MB] (16 MBps) [2024-11-19T14:21:25.700Z] Copying: 692/1024 [MB] (16 MBps) [2024-11-19T14:21:27.085Z] Copying: 711/1024 [MB] (19 MBps) [2024-11-19T14:21:28.028Z] Copying: 728/1024 [MB] (17 MBps) [2024-11-19T14:21:28.969Z] Copying: 
748/1024 [MB] (19 MBps) [2024-11-19T14:21:29.911Z] Copying: 770/1024 [MB] (22 MBps) [2024-11-19T14:21:30.851Z] Copying: 783/1024 [MB] (12 MBps) [2024-11-19T14:21:31.793Z] Copying: 798/1024 [MB] (15 MBps) [2024-11-19T14:21:32.737Z] Copying: 814/1024 [MB] (15 MBps) [2024-11-19T14:21:33.680Z] Copying: 834/1024 [MB] (19 MBps) [2024-11-19T14:21:35.067Z] Copying: 855/1024 [MB] (20 MBps) [2024-11-19T14:21:36.009Z] Copying: 871/1024 [MB] (16 MBps) [2024-11-19T14:21:36.950Z] Copying: 886/1024 [MB] (14 MBps) [2024-11-19T14:21:37.894Z] Copying: 913/1024 [MB] (27 MBps) [2024-11-19T14:21:38.839Z] Copying: 924/1024 [MB] (10 MBps) [2024-11-19T14:21:39.782Z] Copying: 942/1024 [MB] (18 MBps) [2024-11-19T14:21:40.727Z] Copying: 956/1024 [MB] (14 MBps) [2024-11-19T14:21:42.116Z] Copying: 971/1024 [MB] (14 MBps) [2024-11-19T14:21:42.689Z] Copying: 990/1024 [MB] (19 MBps) [2024-11-19T14:21:44.078Z] Copying: 1015/1024 [MB] (25 MBps) [2024-11-19T14:21:44.339Z] Copying: 1048036/1048576 [kB] (8316 kBps) [2024-11-19T14:21:44.340Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-19 14:21:44.172894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.778 [2024-11-19 14:21:44.172976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:45.778 [2024-11-19 14:21:44.172992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:45.778 [2024-11-19 14:21:44.173001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.778 [2024-11-19 14:21:44.176124] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:45.778 [2024-11-19 14:21:44.181099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.778 [2024-11-19 14:21:44.181291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:45.778 [2024-11-19 14:21:44.181313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.921 ms 00:20:45.778 [2024-11-19 14:21:44.181321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.778 [2024-11-19 14:21:44.193518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.778 [2024-11-19 14:21:44.193566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:45.778 [2024-11-19 14:21:44.193591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.211 ms 00:20:45.778 [2024-11-19 14:21:44.193599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.778 [2024-11-19 14:21:44.220291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.778 [2024-11-19 14:21:44.220504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:45.778 [2024-11-19 14:21:44.220529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.672 ms 00:20:45.778 [2024-11-19 14:21:44.220539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.778 [2024-11-19 14:21:44.226749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.778 [2024-11-19 14:21:44.226797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:45.778 [2024-11-19 14:21:44.226810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.110 ms 00:20:45.778 [2024-11-19 14:21:44.226828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.778 [2024-11-19 14:21:44.253762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
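The shutdown trace above runs the startup path in reverse: the core IO channel is torn down, the core poller stopped, and the L2P persisted, before each remaining metadata region (NV cache, valid map, P2L, band info, trim) is flushed below, ending with the superblock write and the clean-state flag. Each management step is logged as a trace_step group of four records: Action (or Rollback), name, duration, and status. That makes per-step timing easy to pull back out of a log after the fact. A minimal sketch of such a summary in Python, assuming only the record shape visible in this trace — each name record immediately followed by its duration record — and a hypothetical log file path as the first argument:

```python
#!/usr/bin/env python3
"""Minimal sketch: summarize FTL trace_step durations from an SPDK test log."""
import re
import sys

# A step name runs until the next Jenkins elapsed-time stamp (HH:MM:SS.mmm)
# or the end of the line, whichever comes first.
NAME_RE = re.compile(
    r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)"
    r"(?= \d{2}:\d{2}:\d{2}\.\d{3} |$)", re.M)
DUR_RE = re.compile(
    r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def step_durations(text):
    # In this trace every 'name' record is immediately followed by its
    # 'duration' record, so pairing the two lists positionally is safe.
    names = NAME_RE.findall(text)
    durations = [float(d) for d in DUR_RE.findall(text)]
    return list(zip(names, durations))

if __name__ == "__main__":
    steps = step_durations(open(sys.argv[1]).read())
    for name, ms in sorted(steps, key=lambda s: -s[1])[:10]:
        print(f"{ms:10.3f} ms  {name}")
    print(f"{sum(ms for _, ms in steps):10.3f} ms  across all steps")
```

The per-step figures can then be set against the overall duration that finish_msg prints when the whole management process completes.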
00:20:45.778 [2024-11-19 14:21:44.253812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:45.778 [2024-11-19 14:21:44.253825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.843 ms 00:20:45.778 [2024-11-19 14:21:44.253833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.778 [2024-11-19 14:21:44.270986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.778 [2024-11-19 14:21:44.271034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:45.778 [2024-11-19 14:21:44.271046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.089 ms 00:20:45.778 [2024-11-19 14:21:44.271054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.040 [2024-11-19 14:21:44.413159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.040 [2024-11-19 14:21:44.413348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:46.040 [2024-11-19 14:21:44.413372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 142.051 ms 00:20:46.040 [2024-11-19 14:21:44.413382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.040 [2024-11-19 14:21:44.440085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.040 [2024-11-19 14:21:44.440279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:46.040 [2024-11-19 14:21:44.440301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.672 ms 00:20:46.041 [2024-11-19 14:21:44.440310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.041 [2024-11-19 14:21:44.466491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.041 [2024-11-19 14:21:44.466535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:46.041 [2024-11-19 14:21:44.466560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.119 ms 00:20:46.041 [2024-11-19 14:21:44.466568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.041 [2024-11-19 14:21:44.491688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.041 [2024-11-19 14:21:44.491730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:46.041 [2024-11-19 14:21:44.491742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.074 ms 00:20:46.041 [2024-11-19 14:21:44.491750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.041 [2024-11-19 14:21:44.517159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.041 [2024-11-19 14:21:44.517200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:46.041 [2024-11-19 14:21:44.517212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.308 ms 00:20:46.041 [2024-11-19 14:21:44.517220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.041 [2024-11-19 14:21:44.517266] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:46.041 [2024-11-19 14:21:44.517281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 92672 / 261120 wr_cnt: 1 state: open 00:20:46.041 [2024-11-19 14:21:44.517294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517302] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517504] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 
[2024-11-19 14:21:44.517703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:46.041 [2024-11-19 14:21:44.517914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.517922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:20:46.042 [2024-11-19 14:21:44.517931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.517939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.517947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.517955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.517964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.517971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.517980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.517989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.517997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.518006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.518037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.518045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.518053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.518062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.518070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.518078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.518087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.518094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.518102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.518110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.518119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.518133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.518141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:46.042 [2024-11-19 14:21:44.518157] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:46.042 [2024-11-19 14:21:44.518166] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 75a07b1c-071a-49d1-8758-82829c9986d4 
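At this point the band dump is complete: of the hundred bands listed, only Band 1 is open, with 92672 of its 261120 blocks written and every other band free, and the statistics that follow report the matching counters. The write-amplification factor printed there can be re-derived from those counters directly; a quick check with the numbers from this run:

```python
# Re-derive the WAF from the counters in the stats dump that follows
# (values taken from this run's log).
total_writes = 93632   # all blocks written to bands, FTL metadata included
user_writes = 92672    # blocks written on behalf of the host
print(f"WAF = {total_writes / user_writes:.4f}")   # -> WAF = 1.0104
```

The 960-block gap between the two counters is the device's own bookkeeping traffic, and the total valid LBA count (92672) lines up with the valid count shown for the single open band.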
00:20:46.042 [2024-11-19 14:21:44.518175] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 92672 00:20:46.042 [2024-11-19 14:21:44.518184] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 93632 00:20:46.042 [2024-11-19 14:21:44.518192] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 92672 00:20:46.042 [2024-11-19 14:21:44.518205] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0104 00:20:46.042 [2024-11-19 14:21:44.518225] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:46.042 [2024-11-19 14:21:44.518233] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:46.042 [2024-11-19 14:21:44.518241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:46.042 [2024-11-19 14:21:44.518255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:46.042 [2024-11-19 14:21:44.518262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:46.042 [2024-11-19 14:21:44.518270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.042 [2024-11-19 14:21:44.518278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:46.042 [2024-11-19 14:21:44.518286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:20:46.042 [2024-11-19 14:21:44.518294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.042 [2024-11-19 14:21:44.531974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.042 [2024-11-19 14:21:44.532023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:46.042 [2024-11-19 14:21:44.532035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.632 ms 00:20:46.042 [2024-11-19 14:21:44.532042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.042 [2024-11-19 14:21:44.532255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.042 [2024-11-19 14:21:44.532264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:46.042 [2024-11-19 14:21:44.532273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:20:46.042 [2024-11-19 14:21:44.532280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.042 [2024-11-19 14:21:44.571628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.042 [2024-11-19 14:21:44.571676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:46.042 [2024-11-19 14:21:44.571688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.042 [2024-11-19 14:21:44.571697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.042 [2024-11-19 14:21:44.571756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.042 [2024-11-19 14:21:44.571764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:46.042 [2024-11-19 14:21:44.571773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.042 [2024-11-19 14:21:44.571781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.042 [2024-11-19 14:21:44.571856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.042 [2024-11-19 14:21:44.571873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:46.042 [2024-11-19 
14:21:44.571910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.042 [2024-11-19 14:21:44.571919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.042 [2024-11-19 14:21:44.571935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.042 [2024-11-19 14:21:44.571943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:46.042 [2024-11-19 14:21:44.571951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.042 [2024-11-19 14:21:44.571959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.303 [2024-11-19 14:21:44.653752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.303 [2024-11-19 14:21:44.653974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:46.303 [2024-11-19 14:21:44.653997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.303 [2024-11-19 14:21:44.654005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.303 [2024-11-19 14:21:44.686009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.303 [2024-11-19 14:21:44.686055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:46.303 [2024-11-19 14:21:44.686066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.303 [2024-11-19 14:21:44.686074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.303 [2024-11-19 14:21:44.686141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.303 [2024-11-19 14:21:44.686151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:46.303 [2024-11-19 14:21:44.686167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.304 [2024-11-19 14:21:44.686175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.304 [2024-11-19 14:21:44.686219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.304 [2024-11-19 14:21:44.686229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:46.304 [2024-11-19 14:21:44.686237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.304 [2024-11-19 14:21:44.686246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.304 [2024-11-19 14:21:44.686350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.304 [2024-11-19 14:21:44.686361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:46.304 [2024-11-19 14:21:44.686370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.304 [2024-11-19 14:21:44.686381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.304 [2024-11-19 14:21:44.686413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.304 [2024-11-19 14:21:44.686422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:46.304 [2024-11-19 14:21:44.686430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.304 [2024-11-19 14:21:44.686438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.304 [2024-11-19 14:21:44.686480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.304 [2024-11-19 14:21:44.686489] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:46.304 [2024-11-19 14:21:44.686498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.304 [2024-11-19 14:21:44.686509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.304 [2024-11-19 14:21:44.686558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.304 [2024-11-19 14:21:44.686568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:46.304 [2024-11-19 14:21:44.686577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.304 [2024-11-19 14:21:44.686584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.304 [2024-11-19 14:21:44.686718] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 516.714 ms, result 0 00:20:47.693 00:20:47.693 00:20:47.693 14:21:46 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:47.693 [2024-11-19 14:21:46.166512] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:47.693 [2024-11-19 14:21:46.166673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75218 ] 00:20:47.955 [2024-11-19 14:21:46.320822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.216 [2024-11-19 14:21:46.533062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.479 [2024-11-19 14:21:46.822421] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:48.479 [2024-11-19 14:21:46.822680] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:48.479 [2024-11-19 14:21:46.978006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.479 [2024-11-19 14:21:46.978062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:48.479 [2024-11-19 14:21:46.978078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:48.479 [2024-11-19 14:21:46.978090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.479 [2024-11-19 14:21:46.978141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.479 [2024-11-19 14:21:46.978151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:48.479 [2024-11-19 14:21:46.978160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:48.479 [2024-11-19 14:21:46.978168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.479 [2024-11-19 14:21:46.978188] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:48.479 [2024-11-19 14:21:46.979013] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:48.479 [2024-11-19 14:21:46.979031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.479 [2024-11-19 14:21:46.979040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:48.479 [2024-11-19 14:21:46.979050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.848 ms 00:20:48.479 [2024-11-19 14:21:46.979058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.479 [2024-11-19 14:21:46.980816] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:48.479 [2024-11-19 14:21:46.995570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.479 [2024-11-19 14:21:46.995620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:48.479 [2024-11-19 14:21:46.995635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.756 ms 00:20:48.479 [2024-11-19 14:21:46.995643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.479 [2024-11-19 14:21:46.995720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.479 [2024-11-19 14:21:46.995730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:48.479 [2024-11-19 14:21:46.995739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:48.479 [2024-11-19 14:21:46.995747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.479 [2024-11-19 14:21:47.003853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.479 [2024-11-19 14:21:47.003919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:48.479 [2024-11-19 14:21:47.003930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.028 ms 00:20:48.479 [2024-11-19 14:21:47.003938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.479 [2024-11-19 14:21:47.004034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.479 [2024-11-19 14:21:47.004063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:48.479 [2024-11-19 14:21:47.004073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:48.479 [2024-11-19 14:21:47.004081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.479 [2024-11-19 14:21:47.004125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.479 [2024-11-19 14:21:47.004134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:48.479 [2024-11-19 14:21:47.004143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:48.479 [2024-11-19 14:21:47.004150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.479 [2024-11-19 14:21:47.004180] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:48.479 [2024-11-19 14:21:47.008360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.479 [2024-11-19 14:21:47.008397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:48.479 [2024-11-19 14:21:47.008408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.192 ms 00:20:48.479 [2024-11-19 14:21:47.008415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.479 [2024-11-19 14:21:47.008454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.479 [2024-11-19 14:21:47.008462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:48.479 [2024-11-19 14:21:47.008471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:48.479 [2024-11-19 14:21:47.008481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:20:48.479 [2024-11-19 14:21:47.008531] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:48.479 [2024-11-19 14:21:47.008554] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:48.479 [2024-11-19 14:21:47.008589] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:48.479 [2024-11-19 14:21:47.008606] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:48.479 [2024-11-19 14:21:47.008682] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:48.479 [2024-11-19 14:21:47.008693] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:48.479 [2024-11-19 14:21:47.008706] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:48.479 [2024-11-19 14:21:47.008716] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:48.479 [2024-11-19 14:21:47.008725] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:48.479 [2024-11-19 14:21:47.008734] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:48.479 [2024-11-19 14:21:47.008742] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:48.479 [2024-11-19 14:21:47.008750] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:48.479 [2024-11-19 14:21:47.008757] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:48.479 [2024-11-19 14:21:47.008766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.479 [2024-11-19 14:21:47.008774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:48.479 [2024-11-19 14:21:47.008782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:20:48.479 [2024-11-19 14:21:47.008789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.479 [2024-11-19 14:21:47.008852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.479 [2024-11-19 14:21:47.008860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:48.479 [2024-11-19 14:21:47.008868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:48.479 [2024-11-19 14:21:47.008897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.479 [2024-11-19 14:21:47.008977] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:48.479 [2024-11-19 14:21:47.008988] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:48.479 [2024-11-19 14:21:47.008996] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:48.479 [2024-11-19 14:21:47.009005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.479 [2024-11-19 14:21:47.009014] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:48.479 [2024-11-19 14:21:47.009021] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:48.479 [2024-11-19 14:21:47.009027] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:48.479 [2024-11-19 14:21:47.009034] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:48.479 [2024-11-19 14:21:47.009041] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:48.479 [2024-11-19 14:21:47.009047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:48.479 [2024-11-19 14:21:47.009054] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:48.480 [2024-11-19 14:21:47.009062] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:48.480 [2024-11-19 14:21:47.009069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:48.480 [2024-11-19 14:21:47.009076] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:48.480 [2024-11-19 14:21:47.009084] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:48.480 [2024-11-19 14:21:47.009090] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.480 [2024-11-19 14:21:47.009105] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:48.480 [2024-11-19 14:21:47.009111] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:48.480 [2024-11-19 14:21:47.009118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.480 [2024-11-19 14:21:47.009126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:48.480 [2024-11-19 14:21:47.009133] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:48.480 [2024-11-19 14:21:47.009139] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:48.480 [2024-11-19 14:21:47.009146] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:48.480 [2024-11-19 14:21:47.009152] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:48.480 [2024-11-19 14:21:47.009159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:48.480 [2024-11-19 14:21:47.009166] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:48.480 [2024-11-19 14:21:47.009172] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:48.480 [2024-11-19 14:21:47.009178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:48.480 [2024-11-19 14:21:47.009184] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:48.480 [2024-11-19 14:21:47.009191] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:48.480 [2024-11-19 14:21:47.009198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:48.480 [2024-11-19 14:21:47.009204] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:48.480 [2024-11-19 14:21:47.009210] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:48.480 [2024-11-19 14:21:47.009216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:48.480 [2024-11-19 14:21:47.009223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:48.480 [2024-11-19 14:21:47.009229] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:48.480 [2024-11-19 14:21:47.009235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:48.480 [2024-11-19 14:21:47.009242] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:48.480 [2024-11-19 14:21:47.009249] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:48.480 [2024-11-19 
14:21:47.009255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:48.480 [2024-11-19 14:21:47.009263] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:48.480 [2024-11-19 14:21:47.009274] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:48.480 [2024-11-19 14:21:47.009281] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:48.480 [2024-11-19 14:21:47.009289] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.480 [2024-11-19 14:21:47.009297] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:48.480 [2024-11-19 14:21:47.009304] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:48.480 [2024-11-19 14:21:47.009311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:48.480 [2024-11-19 14:21:47.009318] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:48.480 [2024-11-19 14:21:47.009326] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:48.480 [2024-11-19 14:21:47.009332] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:48.480 [2024-11-19 14:21:47.009340] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:48.480 [2024-11-19 14:21:47.009349] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:48.480 [2024-11-19 14:21:47.009358] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:48.480 [2024-11-19 14:21:47.009366] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:48.480 [2024-11-19 14:21:47.009372] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:48.480 [2024-11-19 14:21:47.009379] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:48.480 [2024-11-19 14:21:47.009386] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:48.480 [2024-11-19 14:21:47.009393] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:48.480 [2024-11-19 14:21:47.009400] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:48.480 [2024-11-19 14:21:47.009408] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:48.480 [2024-11-19 14:21:47.009414] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:48.480 [2024-11-19 14:21:47.009422] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:48.480 [2024-11-19 14:21:47.009429] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:48.480 [2024-11-19 14:21:47.009436] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:48.480 [2024-11-19 14:21:47.009444] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:48.480 [2024-11-19 14:21:47.009450] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:48.480 [2024-11-19 14:21:47.009458] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:48.480 [2024-11-19 14:21:47.009467] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:48.480 [2024-11-19 14:21:47.009474] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:48.480 [2024-11-19 14:21:47.009481] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:48.480 [2024-11-19 14:21:47.009489] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:48.480 [2024-11-19 14:21:47.009496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.480 [2024-11-19 14:21:47.009503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:48.480 [2024-11-19 14:21:47.009511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:20:48.480 [2024-11-19 14:21:47.009518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.480 [2024-11-19 14:21:47.028310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.480 [2024-11-19 14:21:47.028379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:48.480 [2024-11-19 14:21:47.028398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.748 ms 00:20:48.480 [2024-11-19 14:21:47.028417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.480 [2024-11-19 14:21:47.028539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.480 [2024-11-19 14:21:47.028553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:48.480 [2024-11-19 14:21:47.028567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:48.480 [2024-11-19 14:21:47.028581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.746 [2024-11-19 14:21:47.075862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.746 [2024-11-19 14:21:47.075932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:48.746 [2024-11-19 14:21:47.075946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.208 ms 00:20:48.746 [2024-11-19 14:21:47.075955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.746 [2024-11-19 14:21:47.076005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.746 [2024-11-19 14:21:47.076015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:48.746 [2024-11-19 14:21:47.076024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:48.746 [2024-11-19 14:21:47.076032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
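The superblock v5 region table dumped above is self-describing: each region's blk_offs plus blk_sz lands exactly on the next region's blk_offs, with type 0xfffffffe marking the leftover scratch space. As a quick sanity check, the first few nvc regions from this log can be verified with a short script; the offset/size pairs below are copied from the dump, and the helper name is illustrative, not part of SPDK:

    # Verifies that consecutive regions are contiguous: blk_offs + blk_sz
    # of one region must equal blk_offs of the next. Values from the log.
    check_regions() {
        local prev_end=0 offs sz
        while read -r offs sz; do
            (( offs == prev_end )) || echo "gap before 0x$(printf '%x' "$offs")"
            prev_end=$(( offs + sz ))
        done <<< $'0x0 0x20\n0x20 0x5000\n0x5020 0x80\n0x50a0 0x80\n0x5120 0x400'
    }
    check_regions

Running it prints nothing, confirming the table is gap-free over these entries (0x0+0x20=0x20, 0x20+0x5000=0x5020, and so on).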
00:20:48.746 [2024-11-19 14:21:47.076597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.746 [2024-11-19 14:21:47.076620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:48.746 [2024-11-19 14:21:47.076636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:20:48.746 [2024-11-19 14:21:47.076644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.746 [2024-11-19 14:21:47.076766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.746 [2024-11-19 14:21:47.076775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:48.746 [2024-11-19 14:21:47.076784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:48.746 [2024-11-19 14:21:47.076792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.746 [2024-11-19 14:21:47.093377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.746 [2024-11-19 14:21:47.093420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:48.746 [2024-11-19 14:21:47.093431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.561 ms 00:20:48.746 [2024-11-19 14:21:47.093439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.746 [2024-11-19 14:21:47.107557] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:48.746 [2024-11-19 14:21:47.107745] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:48.746 [2024-11-19 14:21:47.107765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.746 [2024-11-19 14:21:47.107773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:48.746 [2024-11-19 14:21:47.107783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.213 ms 00:20:48.746 [2024-11-19 14:21:47.107790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.746 [2024-11-19 14:21:47.134285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.746 [2024-11-19 14:21:47.134461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:48.746 [2024-11-19 14:21:47.134483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.451 ms 00:20:48.746 [2024-11-19 14:21:47.134491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.746 [2024-11-19 14:21:47.147331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.746 [2024-11-19 14:21:47.147389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:48.746 [2024-11-19 14:21:47.147402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.787 ms 00:20:48.746 [2024-11-19 14:21:47.147410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.746 [2024-11-19 14:21:47.160106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.746 [2024-11-19 14:21:47.160147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:48.746 [2024-11-19 14:21:47.160169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.648 ms 00:20:48.746 [2024-11-19 14:21:47.160176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.746 [2024-11-19 14:21:47.160565] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.746 [2024-11-19 14:21:47.160578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:48.746 [2024-11-19 14:21:47.160588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:20:48.746 [2024-11-19 14:21:47.160596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.746 [2024-11-19 14:21:47.228371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.746 [2024-11-19 14:21:47.228430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:48.746 [2024-11-19 14:21:47.228446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.756 ms 00:20:48.746 [2024-11-19 14:21:47.228454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.746 [2024-11-19 14:21:47.240125] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:48.746 [2024-11-19 14:21:47.243308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.746 [2024-11-19 14:21:47.243352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:48.746 [2024-11-19 14:21:47.243381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.788 ms 00:20:48.746 [2024-11-19 14:21:47.243389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.746 [2024-11-19 14:21:47.243468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.746 [2024-11-19 14:21:47.243479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:48.746 [2024-11-19 14:21:47.243489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:48.746 [2024-11-19 14:21:47.243496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.746 [2024-11-19 14:21:47.244912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.746 [2024-11-19 14:21:47.244956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:48.746 [2024-11-19 14:21:47.244968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.379 ms 00:20:48.746 [2024-11-19 14:21:47.244983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.746 [2024-11-19 14:21:47.246321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.746 [2024-11-19 14:21:47.246361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:48.746 [2024-11-19 14:21:47.246371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.312 ms 00:20:48.746 [2024-11-19 14:21:47.246379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.746 [2024-11-19 14:21:47.246414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.746 [2024-11-19 14:21:47.246428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:48.746 [2024-11-19 14:21:47.246437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:48.746 [2024-11-19 14:21:47.246444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.746 [2024-11-19 14:21:47.246481] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:48.746 [2024-11-19 14:21:47.246495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.746 [2024-11-19 14:21:47.246503] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:48.746 [2024-11-19 14:21:47.246511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:48.746 [2024-11-19 14:21:47.246518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.746 [2024-11-19 14:21:47.273068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.746 [2024-11-19 14:21:47.273227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:48.746 [2024-11-19 14:21:47.273291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.530 ms 00:20:48.746 [2024-11-19 14:21:47.273322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.746 [2024-11-19 14:21:47.273690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.747 [2024-11-19 14:21:47.273792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:48.747 [2024-11-19 14:21:47.273915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:48.747 [2024-11-19 14:21:47.273944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.747 [2024-11-19 14:21:47.280062] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.044 ms, result 0 00:20:50.218  [2024-11-19T14:21:49.728Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-19T14:21:50.672Z] Copying: 32/1024 [MB] (22 MBps) [2024-11-19T14:21:51.613Z] Copying: 42/1024 [MB] (10 MBps) [2024-11-19T14:21:52.558Z] Copying: 52/1024 [MB] (10 MBps) [2024-11-19T14:21:53.502Z] Copying: 63/1024 [MB] (10 MBps) [2024-11-19T14:21:54.888Z] Copying: 74/1024 [MB] (10 MBps) [2024-11-19T14:21:55.834Z] Copying: 84/1024 [MB] (10 MBps) [2024-11-19T14:21:56.778Z] Copying: 105/1024 [MB] (20 MBps) [2024-11-19T14:21:57.745Z] Copying: 116/1024 [MB] (11 MBps) [2024-11-19T14:21:58.686Z] Copying: 130/1024 [MB] (13 MBps) [2024-11-19T14:21:59.626Z] Copying: 140/1024 [MB] (10 MBps) [2024-11-19T14:22:00.569Z] Copying: 151/1024 [MB] (10 MBps) [2024-11-19T14:22:01.514Z] Copying: 163/1024 [MB] (11 MBps) [2024-11-19T14:22:02.900Z] Copying: 174/1024 [MB] (10 MBps) [2024-11-19T14:22:03.473Z] Copying: 189/1024 [MB] (15 MBps) [2024-11-19T14:22:04.861Z] Copying: 200/1024 [MB] (10 MBps) [2024-11-19T14:22:05.807Z] Copying: 211/1024 [MB] (10 MBps) [2024-11-19T14:22:06.753Z] Copying: 227/1024 [MB] (15 MBps) [2024-11-19T14:22:07.696Z] Copying: 239/1024 [MB] (11 MBps) [2024-11-19T14:22:08.640Z] Copying: 256/1024 [MB] (17 MBps) [2024-11-19T14:22:09.585Z] Copying: 275/1024 [MB] (19 MBps) [2024-11-19T14:22:10.530Z] Copying: 293/1024 [MB] (18 MBps) [2024-11-19T14:22:11.479Z] Copying: 312/1024 [MB] (18 MBps) [2024-11-19T14:22:12.867Z] Copying: 325/1024 [MB] (12 MBps) [2024-11-19T14:22:13.812Z] Copying: 341/1024 [MB] (16 MBps) [2024-11-19T14:22:14.757Z] Copying: 354/1024 [MB] (12 MBps) [2024-11-19T14:22:15.701Z] Copying: 365/1024 [MB] (10 MBps) [2024-11-19T14:22:16.644Z] Copying: 383/1024 [MB] (17 MBps) [2024-11-19T14:22:17.589Z] Copying: 397/1024 [MB] (14 MBps) [2024-11-19T14:22:18.536Z] Copying: 411/1024 [MB] (14 MBps) [2024-11-19T14:22:19.563Z] Copying: 423/1024 [MB] (11 MBps) [2024-11-19T14:22:20.508Z] Copying: 441/1024 [MB] (17 MBps) [2024-11-19T14:22:21.895Z] Copying: 469/1024 [MB] (28 MBps) [2024-11-19T14:22:22.469Z] Copying: 487/1024 [MB] (17 MBps) [2024-11-19T14:22:23.859Z] Copying: 504/1024 [MB] (16 MBps) [2024-11-19T14:22:24.803Z] Copying: 519/1024 [MB] (15 MBps) [2024-11-19T14:22:25.747Z] 
Copying: 536/1024 [MB] (17 MBps) [2024-11-19T14:22:26.691Z] Copying: 553/1024 [MB] (16 MBps) [2024-11-19T14:22:27.633Z] Copying: 567/1024 [MB] (13 MBps) [2024-11-19T14:22:28.578Z] Copying: 593/1024 [MB] (25 MBps) [2024-11-19T14:22:29.518Z] Copying: 612/1024 [MB] (19 MBps) [2024-11-19T14:22:30.902Z] Copying: 627/1024 [MB] (14 MBps) [2024-11-19T14:22:31.476Z] Copying: 639/1024 [MB] (12 MBps) [2024-11-19T14:22:32.865Z] Copying: 656/1024 [MB] (16 MBps) [2024-11-19T14:22:33.811Z] Copying: 678/1024 [MB] (22 MBps) [2024-11-19T14:22:34.756Z] Copying: 692/1024 [MB] (14 MBps) [2024-11-19T14:22:35.701Z] Copying: 712/1024 [MB] (20 MBps) [2024-11-19T14:22:36.645Z] Copying: 730/1024 [MB] (18 MBps) [2024-11-19T14:22:37.594Z] Copying: 755/1024 [MB] (24 MBps) [2024-11-19T14:22:38.537Z] Copying: 768/1024 [MB] (13 MBps) [2024-11-19T14:22:39.481Z] Copying: 784/1024 [MB] (16 MBps) [2024-11-19T14:22:40.870Z] Copying: 795/1024 [MB] (10 MBps) [2024-11-19T14:22:41.817Z] Copying: 811/1024 [MB] (15 MBps) [2024-11-19T14:22:42.762Z] Copying: 824/1024 [MB] (13 MBps) [2024-11-19T14:22:43.708Z] Copying: 841/1024 [MB] (17 MBps) [2024-11-19T14:22:44.653Z] Copying: 859/1024 [MB] (17 MBps) [2024-11-19T14:22:45.598Z] Copying: 874/1024 [MB] (14 MBps) [2024-11-19T14:22:46.542Z] Copying: 890/1024 [MB] (16 MBps) [2024-11-19T14:22:47.487Z] Copying: 904/1024 [MB] (13 MBps) [2024-11-19T14:22:48.874Z] Copying: 917/1024 [MB] (13 MBps) [2024-11-19T14:22:49.818Z] Copying: 934/1024 [MB] (17 MBps) [2024-11-19T14:22:50.821Z] Copying: 947/1024 [MB] (12 MBps) [2024-11-19T14:22:51.823Z] Copying: 963/1024 [MB] (16 MBps) [2024-11-19T14:22:52.768Z] Copying: 979/1024 [MB] (15 MBps) [2024-11-19T14:22:53.713Z] Copying: 990/1024 [MB] (10 MBps) [2024-11-19T14:22:54.658Z] Copying: 1003/1024 [MB] (12 MBps) [2024-11-19T14:22:54.920Z] Copying: 1019/1024 [MB] (16 MBps) [2024-11-19T14:22:54.920Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-19 14:22:54.738607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.358 [2024-11-19 14:22:54.738650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:56.358 [2024-11-19 14:22:54.738661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:56.358 [2024-11-19 14:22:54.738668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.358 [2024-11-19 14:22:54.738684] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:56.358 [2024-11-19 14:22:54.740981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.358 [2024-11-19 14:22:54.741003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:56.358 [2024-11-19 14:22:54.741012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.286 ms 00:21:56.358 [2024-11-19 14:22:54.741019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.358 [2024-11-19 14:22:54.741206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.358 [2024-11-19 14:22:54.741218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:56.358 [2024-11-19 14:22:54.741224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:21:56.358 [2024-11-19 14:22:54.741230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.358 [2024-11-19 14:22:54.746066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.358 [2024-11-19 14:22:54.746093] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:56.358 [2024-11-19 14:22:54.746102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.824 ms 00:21:56.358 [2024-11-19 14:22:54.746110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.358 [2024-11-19 14:22:54.751239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.358 [2024-11-19 14:22:54.751259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:56.358 [2024-11-19 14:22:54.751270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.100 ms 00:21:56.358 [2024-11-19 14:22:54.751276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.358 [2024-11-19 14:22:54.770723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.358 [2024-11-19 14:22:54.770748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:56.358 [2024-11-19 14:22:54.770756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.406 ms 00:21:56.358 [2024-11-19 14:22:54.770762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.358 [2024-11-19 14:22:54.782786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.358 [2024-11-19 14:22:54.782809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:56.358 [2024-11-19 14:22:54.782818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.997 ms 00:21:56.358 [2024-11-19 14:22:54.782824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.358 [2024-11-19 14:22:54.908381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.358 [2024-11-19 14:22:54.908407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:56.358 [2024-11-19 14:22:54.908415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 125.526 ms 00:21:56.358 [2024-11-19 14:22:54.908421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.621 [2024-11-19 14:22:54.927361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.621 [2024-11-19 14:22:54.927384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:56.621 [2024-11-19 14:22:54.927391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.925 ms 00:21:56.621 [2024-11-19 14:22:54.927397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.621 [2024-11-19 14:22:54.946100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.621 [2024-11-19 14:22:54.946120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:56.621 [2024-11-19 14:22:54.946128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.671 ms 00:21:56.621 [2024-11-19 14:22:54.946140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.621 [2024-11-19 14:22:54.964110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.621 [2024-11-19 14:22:54.964130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:56.621 [2024-11-19 14:22:54.964137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.944 ms 00:21:56.621 [2024-11-19 14:22:54.964142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.621 [2024-11-19 14:22:54.982065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:21:56.621 [2024-11-19 14:22:54.982085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:56.621 [2024-11-19 14:22:54.982092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.871 ms 00:21:56.621 [2024-11-19 14:22:54.982097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.621 [2024-11-19 14:22:54.982122] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:56.621 [2024-11-19 14:22:54.982133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133376 / 261120 wr_cnt: 1 state: open 00:21:56.621 [2024-11-19 14:22:54.982140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982256] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:56.621 [2024-11-19 14:22:54.982308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982395] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 
14:22:54.982538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:21:56.622 [2024-11-19 14:22:54.982681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:56.622 [2024-11-19 14:22:54.982709] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:56.622 [2024-11-19 14:22:54.982715] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 75a07b1c-071a-49d1-8758-82829c9986d4 00:21:56.622 [2024-11-19 14:22:54.982720] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133376 00:21:56.622 [2024-11-19 14:22:54.982726] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 41664 00:21:56.622 [2024-11-19 14:22:54.982734] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 40704 00:21:56.622 [2024-11-19 14:22:54.982740] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0236 00:21:56.622 [2024-11-19 14:22:54.982745] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:56.622 [2024-11-19 14:22:54.982751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:56.622 [2024-11-19 14:22:54.982756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:56.622 [2024-11-19 14:22:54.982760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:56.622 [2024-11-19 14:22:54.982769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:56.622 [2024-11-19 14:22:54.982774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.622 [2024-11-19 14:22:54.982782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:56.622 [2024-11-19 14:22:54.982788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:21:56.622 [2024-11-19 14:22:54.982793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.622 [2024-11-19 14:22:54.992299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.622 [2024-11-19 14:22:54.992322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:56.622 [2024-11-19 14:22:54.992329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.487 ms 00:21:56.622 [2024-11-19 14:22:54.992335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.622 [2024-11-19 14:22:54.992477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.622 [2024-11-19 14:22:54.992484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:56.622 [2024-11-19 14:22:54.992490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:21:56.622 [2024-11-19 14:22:54.992495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.622 [2024-11-19 14:22:55.019972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.622 [2024-11-19 14:22:55.019994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:56.623 [2024-11-19 14:22:55.020001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
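The statistics dumped just above are internally consistent: the WAF (write amplification factor) line is simply total writes divided by user writes. Reproducing the arithmetic from the logged counters:

    # WAF = total writes / user writes, using the counters from the dump above
    awk 'BEGIN { printf "WAF: %.4f\n", 41664 / 40704 }'    # prints WAF: 1.0236

The small excess over 1.0 reflects the metadata writes (41664 - 40704 = 960 blocks) the FTL issued on top of user I/O during the run.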
00:21:56.623 [2024-11-19 14:22:55.020007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.623 [2024-11-19 14:22:55.020050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.623 [2024-11-19 14:22:55.020056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:56.623 [2024-11-19 14:22:55.020062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.623 [2024-11-19 14:22:55.020067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.623 [2024-11-19 14:22:55.020115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.623 [2024-11-19 14:22:55.020122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:56.623 [2024-11-19 14:22:55.020129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.623 [2024-11-19 14:22:55.020134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.623 [2024-11-19 14:22:55.020145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.623 [2024-11-19 14:22:55.020151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:56.623 [2024-11-19 14:22:55.020157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.623 [2024-11-19 14:22:55.020163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.623 [2024-11-19 14:22:55.077292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.623 [2024-11-19 14:22:55.077319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:56.623 [2024-11-19 14:22:55.077326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.623 [2024-11-19 14:22:55.077332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.623 [2024-11-19 14:22:55.099903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.623 [2024-11-19 14:22:55.099931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:56.623 [2024-11-19 14:22:55.099938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.623 [2024-11-19 14:22:55.099944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.623 [2024-11-19 14:22:55.099988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.623 [2024-11-19 14:22:55.099998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:56.623 [2024-11-19 14:22:55.100004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.623 [2024-11-19 14:22:55.100010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.623 [2024-11-19 14:22:55.100039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.623 [2024-11-19 14:22:55.100046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:56.623 [2024-11-19 14:22:55.100052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.623 [2024-11-19 14:22:55.100058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.623 [2024-11-19 14:22:55.100122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.623 [2024-11-19 14:22:55.100129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:56.623 [2024-11-19 
14:22:55.100137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.623 [2024-11-19 14:22:55.100143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.623 [2024-11-19 14:22:55.100163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.623 [2024-11-19 14:22:55.100170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:56.623 [2024-11-19 14:22:55.100176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.623 [2024-11-19 14:22:55.100181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.623 [2024-11-19 14:22:55.100208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.623 [2024-11-19 14:22:55.100215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:56.623 [2024-11-19 14:22:55.100223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.623 [2024-11-19 14:22:55.100229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.623 [2024-11-19 14:22:55.100260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.623 [2024-11-19 14:22:55.100266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:56.623 [2024-11-19 14:22:55.100272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.623 [2024-11-19 14:22:55.100278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.623 [2024-11-19 14:22:55.100361] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 361.733 ms, result 0 00:21:57.566 00:21:57.566 00:21:57.566 14:22:55 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:59.484 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:59.484 14:22:57 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:59.484 14:22:57 -- ftl/restore.sh@85 -- # restore_kill 00:21:59.484 14:22:57 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:59.484 14:22:58 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:59.484 14:22:58 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:59.484 14:22:58 -- ftl/restore.sh@32 -- # killprocess 73040 00:21:59.484 14:22:58 -- common/autotest_common.sh@936 -- # '[' -z 73040 ']' 00:21:59.484 Process with pid 73040 is not found 00:21:59.484 14:22:58 -- common/autotest_common.sh@940 -- # kill -0 73040 00:21:59.484 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (73040) - No such process 00:21:59.484 14:22:58 -- common/autotest_common.sh@963 -- # echo 'Process with pid 73040 is not found' 00:21:59.484 14:22:58 -- ftl/restore.sh@33 -- # remove_shm 00:21:59.484 Remove shared memory files 00:21:59.484 14:22:58 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:59.484 14:22:58 -- ftl/common.sh@205 -- # rm -f rm -f 00:21:59.484 14:22:58 -- ftl/common.sh@206 -- # rm -f rm -f 00:21:59.746 14:22:58 -- ftl/common.sh@207 -- # rm -f rm -f 00:21:59.746 14:22:58 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:59.746 14:22:58 -- ftl/common.sh@209 -- # rm -f rm -f 00:21:59.746 ************************************ 00:21:59.746 END TEST ftl_restore 00:21:59.746 00:21:59.746 real 4m39.054s 00:21:59.746 user 4m26.292s 00:21:59.746 sys 0m12.269s 00:21:59.746 
14:22:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:59.746 14:22:58 -- common/autotest_common.sh@10 -- # set +x 00:21:59.746 ************************************ 00:21:59.746 14:22:58 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:21:59.746 14:22:58 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:21:59.746 14:22:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:59.746 14:22:58 -- common/autotest_common.sh@10 -- # set +x 00:21:59.746 ************************************ 00:21:59.746 START TEST ftl_dirty_shutdown 00:21:59.746 ************************************ 00:21:59.746 14:22:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:21:59.746 * Looking for test storage... 00:21:59.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:59.746 14:22:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:59.746 14:22:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:59.746 14:22:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:59.746 14:22:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:59.746 14:22:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:59.746 14:22:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:59.746 14:22:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:59.746 14:22:58 -- scripts/common.sh@335 -- # IFS=.-: 00:21:59.746 14:22:58 -- scripts/common.sh@335 -- # read -ra ver1 00:21:59.746 14:22:58 -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.746 14:22:58 -- scripts/common.sh@336 -- # read -ra ver2 00:21:59.746 14:22:58 -- scripts/common.sh@337 -- # local 'op=<' 00:21:59.746 14:22:58 -- scripts/common.sh@339 -- # ver1_l=2 00:21:59.746 14:22:58 -- scripts/common.sh@340 -- # ver2_l=1 00:21:59.746 14:22:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:59.746 14:22:58 -- scripts/common.sh@343 -- # case "$op" in 00:21:59.746 14:22:58 -- scripts/common.sh@344 -- # : 1 00:21:59.746 14:22:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:59.746 14:22:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:59.746 14:22:58 -- scripts/common.sh@364 -- # decimal 1 00:21:59.746 14:22:58 -- scripts/common.sh@352 -- # local d=1 00:21:59.746 14:22:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.746 14:22:58 -- scripts/common.sh@354 -- # echo 1 00:21:59.746 14:22:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:59.746 14:22:58 -- scripts/common.sh@365 -- # decimal 2 00:21:59.746 14:22:58 -- scripts/common.sh@352 -- # local d=2 00:21:59.746 14:22:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.746 14:22:58 -- scripts/common.sh@354 -- # echo 2 00:21:59.746 14:22:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:59.746 14:22:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:59.746 14:22:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:59.746 14:22:58 -- scripts/common.sh@367 -- # return 0 00:21:59.746 14:22:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.746 14:22:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:59.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.746 --rc genhtml_branch_coverage=1 00:21:59.746 --rc genhtml_function_coverage=1 00:21:59.746 --rc genhtml_legend=1 00:21:59.746 --rc geninfo_all_blocks=1 00:21:59.746 --rc geninfo_unexecuted_blocks=1 00:21:59.746 00:21:59.746 ' 00:21:59.746 14:22:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:59.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.746 --rc genhtml_branch_coverage=1 00:21:59.746 --rc genhtml_function_coverage=1 00:21:59.746 --rc genhtml_legend=1 00:21:59.746 --rc geninfo_all_blocks=1 00:21:59.746 --rc geninfo_unexecuted_blocks=1 00:21:59.746 00:21:59.746 ' 00:21:59.746 14:22:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:59.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.746 --rc genhtml_branch_coverage=1 00:21:59.746 --rc genhtml_function_coverage=1 00:21:59.746 --rc genhtml_legend=1 00:21:59.746 --rc geninfo_all_blocks=1 00:21:59.746 --rc geninfo_unexecuted_blocks=1 00:21:59.746 00:21:59.746 ' 00:21:59.746 14:22:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:59.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.746 --rc genhtml_branch_coverage=1 00:21:59.746 --rc genhtml_function_coverage=1 00:21:59.746 --rc genhtml_legend=1 00:21:59.746 --rc geninfo_all_blocks=1 00:21:59.746 --rc geninfo_unexecuted_blocks=1 00:21:59.746 00:21:59.746 ' 00:21:59.746 14:22:58 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:59.746 14:22:58 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:59.746 14:22:58 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:59.746 14:22:58 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:59.747 14:22:58 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
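The lt/cmp_versions trace above (used here to pick lcov coverage options) implements a field-wise version comparison: both version strings are split on '.', '-' and ':' into arrays, then compared numerically element by element, with missing fields treated as 0. Below is a condensed reconstruction of the '<' path of that logic; scripts/common.sh handles more operators, so this is a sketch rather than the verbatim helper:

    # Returns success when $1 is a strictly older version than $2.
    ver_lt() {
        local IFS=.-: i
        local -a a=($1) b=($2)
        for (( i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0    # earliest differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not 'less than'
    }
    ver_lt 1.15 2 && echo '1.15 < 2'    # matches the traced result for lcov 1.15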
00:21:59.747 14:22:58 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:59.747 14:22:58 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:59.747 14:22:58 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:59.747 14:22:58 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:59.747 14:22:58 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:59.747 14:22:58 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:59.747 14:22:58 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:59.747 14:22:58 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:59.747 14:22:58 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:59.747 14:22:58 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:59.747 14:22:58 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:59.747 14:22:58 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:59.747 14:22:58 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:59.747 14:22:58 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:59.747 14:22:58 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:59.747 14:22:58 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:59.747 14:22:58 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:59.747 14:22:58 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:59.747 14:22:58 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:59.747 14:22:58 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:59.747 14:22:58 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:59.747 14:22:58 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:59.747 14:22:58 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:59.747 14:22:58 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:59.747 14:22:58 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:59.747 14:22:58 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:59.747 14:22:58 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:59.747 14:22:58 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:59.747 14:22:58 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:21:59.747 14:22:58 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:59.747 14:22:58 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:59.747 14:22:58 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:21:59.747 14:22:58 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:59.747 14:22:58 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:59.747 14:22:58 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:59.747 14:22:58 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:59.747 14:22:58 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:59.747 14:22:58 -- ftl/dirty_shutdown.sh@45 -- # svcpid=76024 00:21:59.747 14:22:58 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76024 00:21:59.747 14:22:58 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:59.747 14:22:58 -- common/autotest_common.sh@829 -- # '[' -z 76024 ']' 
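The option handling traced above relies on getopts with the optstring ':u:c:': the leading ':' enables silent error handling, '-c <bdf>' selects the NV-cache controller, and after the option loop the remaining positional arguments become the base device and the timeout. A minimal sketch of that parsing; the variable names mirror the trace, and the meaning of '-u' is an assumption since it is not exercised in this run:

    # Sketch of ':u:c:' parsing as in dirty_shutdown.sh; '-u' handling is assumed.
    while getopts :u:c: opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;    # e.g. 0000:00:06.0 in this run
            u) uuid=$OPTARG ;;        # assumed: a device UUID to reuse
            *) echo "usage: $0 [-c nv_cache] device [timeout]" >&2; exit 1 ;;
        esac
    done
    shift $(( OPTIND - 1 ))           # equivalent to the 'shift 2' seen in the trace
    device=$1
    timeout=${2:-240}                 # 240 s default, as in the trace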
00:21:59.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.747 14:22:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.747 14:22:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.747 14:22:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.747 14:22:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.747 14:22:58 -- common/autotest_common.sh@10 -- # set +x 00:22:00.008 [2024-11-19 14:22:58.358540] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:00.008 [2024-11-19 14:22:58.358660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76024 ] 00:22:00.008 [2024-11-19 14:22:58.504113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.269 [2024-11-19 14:22:58.722350] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:00.270 [2024-11-19 14:22:58.722573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.656 14:22:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.656 14:22:59 -- common/autotest_common.sh@862 -- # return 0 00:22:01.656 14:22:59 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:22:01.656 14:22:59 -- ftl/common.sh@54 -- # local name=nvme0 00:22:01.656 14:22:59 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:22:01.656 14:22:59 -- ftl/common.sh@56 -- # local size=103424 00:22:01.656 14:22:59 -- ftl/common.sh@59 -- # local base_bdev 00:22:01.656 14:22:59 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:22:01.656 14:23:00 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:01.656 14:23:00 -- ftl/common.sh@62 -- # local base_size 00:22:01.656 14:23:00 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:01.656 14:23:00 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:22:01.656 14:23:00 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:01.656 14:23:00 -- common/autotest_common.sh@1369 -- # local bs 00:22:01.656 14:23:00 -- common/autotest_common.sh@1370 -- # local nb 00:22:01.656 14:23:00 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:01.917 14:23:00 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:01.917 { 00:22:01.917 "name": "nvme0n1", 00:22:01.917 "aliases": [ 00:22:01.917 "22d9e450-d99b-4219-be8b-09fcc0129fe5" 00:22:01.917 ], 00:22:01.917 "product_name": "NVMe disk", 00:22:01.917 "block_size": 4096, 00:22:01.917 "num_blocks": 1310720, 00:22:01.917 "uuid": "22d9e450-d99b-4219-be8b-09fcc0129fe5", 00:22:01.917 "assigned_rate_limits": { 00:22:01.917 "rw_ios_per_sec": 0, 00:22:01.917 "rw_mbytes_per_sec": 0, 00:22:01.917 "r_mbytes_per_sec": 0, 00:22:01.917 "w_mbytes_per_sec": 0 00:22:01.917 }, 00:22:01.917 "claimed": true, 00:22:01.917 "claim_type": "read_many_write_one", 00:22:01.917 "zoned": false, 00:22:01.917 "supported_io_types": { 00:22:01.917 "read": true, 00:22:01.917 "write": true, 00:22:01.917 "unmap": true, 00:22:01.917 "write_zeroes": true, 00:22:01.917 "flush": true, 00:22:01.917 "reset": true, 00:22:01.917 "compare": true, 
00:22:01.917 "compare_and_write": false, 00:22:01.917 "abort": true, 00:22:01.917 "nvme_admin": true, 00:22:01.917 "nvme_io": true 00:22:01.917 }, 00:22:01.917 "driver_specific": { 00:22:01.917 "nvme": [ 00:22:01.917 { 00:22:01.917 "pci_address": "0000:00:07.0", 00:22:01.917 "trid": { 00:22:01.917 "trtype": "PCIe", 00:22:01.917 "traddr": "0000:00:07.0" 00:22:01.917 }, 00:22:01.917 "ctrlr_data": { 00:22:01.917 "cntlid": 0, 00:22:01.917 "vendor_id": "0x1b36", 00:22:01.917 "model_number": "QEMU NVMe Ctrl", 00:22:01.917 "serial_number": "12341", 00:22:01.917 "firmware_revision": "8.0.0", 00:22:01.917 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:01.917 "oacs": { 00:22:01.917 "security": 0, 00:22:01.917 "format": 1, 00:22:01.917 "firmware": 0, 00:22:01.917 "ns_manage": 1 00:22:01.917 }, 00:22:01.917 "multi_ctrlr": false, 00:22:01.917 "ana_reporting": false 00:22:01.917 }, 00:22:01.917 "vs": { 00:22:01.917 "nvme_version": "1.4" 00:22:01.918 }, 00:22:01.918 "ns_data": { 00:22:01.918 "id": 1, 00:22:01.918 "can_share": false 00:22:01.918 } 00:22:01.918 } 00:22:01.918 ], 00:22:01.918 "mp_policy": "active_passive" 00:22:01.918 } 00:22:01.918 } 00:22:01.918 ]' 00:22:01.918 14:23:00 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:01.918 14:23:00 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:01.918 14:23:00 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:01.918 14:23:00 -- common/autotest_common.sh@1373 -- # nb=1310720 00:22:01.918 14:23:00 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:22:01.918 14:23:00 -- common/autotest_common.sh@1377 -- # echo 5120 00:22:01.918 14:23:00 -- ftl/common.sh@63 -- # base_size=5120 00:22:01.918 14:23:00 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:01.918 14:23:00 -- ftl/common.sh@67 -- # clear_lvols 00:22:01.918 14:23:00 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:01.918 14:23:00 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:02.179 14:23:00 -- ftl/common.sh@28 -- # stores=8fb6f71e-5a68-4f25-ab30-cb6213984595 00:22:02.179 14:23:00 -- ftl/common.sh@29 -- # for lvs in $stores 00:22:02.179 14:23:00 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8fb6f71e-5a68-4f25-ab30-cb6213984595 00:22:02.441 14:23:00 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:02.441 14:23:00 -- ftl/common.sh@68 -- # lvs=faae1519-34d0-43f5-b460-d8730dfb7b41 00:22:02.441 14:23:00 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u faae1519-34d0-43f5-b460-d8730dfb7b41 00:22:02.702 14:23:01 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=c20f919c-eaf1-4ad0-804b-03c446c9b4bf 00:22:02.702 14:23:01 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:22:02.702 14:23:01 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 c20f919c-eaf1-4ad0-804b-03c446c9b4bf 00:22:02.702 14:23:01 -- ftl/common.sh@35 -- # local name=nvc0 00:22:02.702 14:23:01 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:22:02.702 14:23:01 -- ftl/common.sh@37 -- # local base_bdev=c20f919c-eaf1-4ad0-804b-03c446c9b4bf 00:22:02.702 14:23:01 -- ftl/common.sh@38 -- # local cache_size= 00:22:02.702 14:23:01 -- ftl/common.sh@41 -- # get_bdev_size c20f919c-eaf1-4ad0-804b-03c446c9b4bf 00:22:02.702 14:23:01 -- common/autotest_common.sh@1367 -- # local bdev_name=c20f919c-eaf1-4ad0-804b-03c446c9b4bf 00:22:02.702 14:23:01 -- 
common/autotest_common.sh@1368 -- # local bdev_info
00:22:02.702 14:23:01 -- common/autotest_common.sh@1369 -- # local bs
00:22:02.703 14:23:01 -- common/autotest_common.sh@1370 -- # local nb
00:22:02.703 14:23:01 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c20f919c-eaf1-4ad0-804b-03c446c9b4bf
00:22:02.963 14:23:01 -- common/autotest_common.sh@1371 -- # bdev_info='[
00:22:02.964 {
00:22:02.964 "name": "c20f919c-eaf1-4ad0-804b-03c446c9b4bf",
00:22:02.964 "aliases": [
00:22:02.964 "lvs/nvme0n1p0"
00:22:02.964 ],
00:22:02.964 "product_name": "Logical Volume",
00:22:02.964 "block_size": 4096,
00:22:02.964 "num_blocks": 26476544,
00:22:02.964 "uuid": "c20f919c-eaf1-4ad0-804b-03c446c9b4bf",
00:22:02.964 "assigned_rate_limits": {
00:22:02.964 "rw_ios_per_sec": 0,
00:22:02.964 "rw_mbytes_per_sec": 0,
00:22:02.964 "r_mbytes_per_sec": 0,
00:22:02.964 "w_mbytes_per_sec": 0
00:22:02.964 },
00:22:02.964 "claimed": false,
00:22:02.964 "zoned": false,
00:22:02.964 "supported_io_types": {
00:22:02.964 "read": true,
00:22:02.964 "write": true,
00:22:02.964 "unmap": true,
00:22:02.964 "write_zeroes": true,
00:22:02.964 "flush": false,
00:22:02.964 "reset": true,
00:22:02.964 "compare": false,
00:22:02.964 "compare_and_write": false,
00:22:02.964 "abort": false,
00:22:02.964 "nvme_admin": false,
00:22:02.964 "nvme_io": false
00:22:02.964 },
00:22:02.964 "driver_specific": {
00:22:02.964 "lvol": {
00:22:02.964 "lvol_store_uuid": "faae1519-34d0-43f5-b460-d8730dfb7b41",
00:22:02.964 "base_bdev": "nvme0n1",
00:22:02.964 "thin_provision": true,
00:22:02.964 "snapshot": false,
00:22:02.964 "clone": false,
00:22:02.964 "esnap_clone": false
00:22:02.964 }
00:22:02.964 }
00:22:02.964 }
00:22:02.964 ]'
00:22:02.964 14:23:01 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size'
00:22:02.964 14:23:01 -- common/autotest_common.sh@1372 -- # bs=4096
00:22:02.964 14:23:01 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks'
00:22:02.964 14:23:01 -- common/autotest_common.sh@1373 -- # nb=26476544
00:22:02.964 14:23:01 -- common/autotest_common.sh@1376 -- # bdev_size=103424
00:22:02.964 14:23:01 -- common/autotest_common.sh@1377 -- # echo 103424
00:22:02.964 14:23:01 -- ftl/common.sh@41 -- # local base_size=5171
00:22:02.964 14:23:01 -- ftl/common.sh@44 -- # local nvc_bdev
00:22:02.964 14:23:01 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
00:22:03.225 14:23:01 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:22:03.225 14:23:01 -- ftl/common.sh@47 -- # [[ -z '' ]]
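Annotation: a second NVMe controller, nvc0 at 0000:00:06.0, is attached here to back the FTL write-buffer cache, and get_bdev_size derives a volume's size in MiB from the bdev_get_bdevs output (block_size times num_blocks). The base_size of 5171 MiB is consistent with sizing the cache at 5 % of the 103424 MiB base volume with integer division, though that sizing rule is inferred from the numbers rather than stated in the trace:

    # get_bdev_size: bytes -> MiB, using the descriptor above
    echo $(( 4096 * 26476544 / 1024 / 1024 ))   # -> 103424
    # 5 % of the base volume, matching base_size and the later cache_size
    echo $(( 103424 * 5 / 100 ))                # -> 5171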
00:22:03.487 "uuid": "c20f919c-eaf1-4ad0-804b-03c446c9b4bf", 00:22:03.487 "assigned_rate_limits": { 00:22:03.487 "rw_ios_per_sec": 0, 00:22:03.487 "rw_mbytes_per_sec": 0, 00:22:03.487 "r_mbytes_per_sec": 0, 00:22:03.487 "w_mbytes_per_sec": 0 00:22:03.487 }, 00:22:03.487 "claimed": false, 00:22:03.487 "zoned": false, 00:22:03.487 "supported_io_types": { 00:22:03.487 "read": true, 00:22:03.487 "write": true, 00:22:03.487 "unmap": true, 00:22:03.487 "write_zeroes": true, 00:22:03.487 "flush": false, 00:22:03.487 "reset": true, 00:22:03.487 "compare": false, 00:22:03.487 "compare_and_write": false, 00:22:03.487 "abort": false, 00:22:03.487 "nvme_admin": false, 00:22:03.487 "nvme_io": false 00:22:03.487 }, 00:22:03.487 "driver_specific": { 00:22:03.487 "lvol": { 00:22:03.487 "lvol_store_uuid": "faae1519-34d0-43f5-b460-d8730dfb7b41", 00:22:03.487 "base_bdev": "nvme0n1", 00:22:03.487 "thin_provision": true, 00:22:03.487 "snapshot": false, 00:22:03.487 "clone": false, 00:22:03.487 "esnap_clone": false 00:22:03.487 } 00:22:03.487 } 00:22:03.487 } 00:22:03.487 ]' 00:22:03.487 14:23:01 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:03.487 14:23:01 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:03.487 14:23:01 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:03.487 14:23:01 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:03.487 14:23:01 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:03.487 14:23:01 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:03.487 14:23:01 -- ftl/common.sh@48 -- # cache_size=5171 00:22:03.487 14:23:01 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:03.749 14:23:02 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:03.749 14:23:02 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size c20f919c-eaf1-4ad0-804b-03c446c9b4bf 00:22:03.749 14:23:02 -- common/autotest_common.sh@1367 -- # local bdev_name=c20f919c-eaf1-4ad0-804b-03c446c9b4bf 00:22:03.749 14:23:02 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:03.749 14:23:02 -- common/autotest_common.sh@1369 -- # local bs 00:22:03.749 14:23:02 -- common/autotest_common.sh@1370 -- # local nb 00:22:03.749 14:23:02 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c20f919c-eaf1-4ad0-804b-03c446c9b4bf 00:22:03.749 14:23:02 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:03.749 { 00:22:03.749 "name": "c20f919c-eaf1-4ad0-804b-03c446c9b4bf", 00:22:03.749 "aliases": [ 00:22:03.749 "lvs/nvme0n1p0" 00:22:03.749 ], 00:22:03.749 "product_name": "Logical Volume", 00:22:03.749 "block_size": 4096, 00:22:03.749 "num_blocks": 26476544, 00:22:03.749 "uuid": "c20f919c-eaf1-4ad0-804b-03c446c9b4bf", 00:22:03.749 "assigned_rate_limits": { 00:22:03.749 "rw_ios_per_sec": 0, 00:22:03.749 "rw_mbytes_per_sec": 0, 00:22:03.749 "r_mbytes_per_sec": 0, 00:22:03.749 "w_mbytes_per_sec": 0 00:22:03.749 }, 00:22:03.749 "claimed": false, 00:22:03.749 "zoned": false, 00:22:03.749 "supported_io_types": { 00:22:03.749 "read": true, 00:22:03.749 "write": true, 00:22:03.749 "unmap": true, 00:22:03.749 "write_zeroes": true, 00:22:03.749 "flush": false, 00:22:03.749 "reset": true, 00:22:03.749 "compare": false, 00:22:03.749 "compare_and_write": false, 00:22:03.749 "abort": false, 00:22:03.749 "nvme_admin": false, 00:22:03.749 "nvme_io": false 00:22:03.749 }, 00:22:03.749 "driver_specific": { 00:22:03.749 "lvol": { 00:22:03.749 "lvol_store_uuid": 
"faae1519-34d0-43f5-b460-d8730dfb7b41", 00:22:03.749 "base_bdev": "nvme0n1", 00:22:03.749 "thin_provision": true, 00:22:03.749 "snapshot": false, 00:22:03.749 "clone": false, 00:22:03.749 "esnap_clone": false 00:22:03.749 } 00:22:03.749 } 00:22:03.749 } 00:22:03.749 ]' 00:22:03.749 14:23:02 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:04.011 14:23:02 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:04.011 14:23:02 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:04.011 14:23:02 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:04.011 14:23:02 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:04.011 14:23:02 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:04.011 14:23:02 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:04.011 14:23:02 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c20f919c-eaf1-4ad0-804b-03c446c9b4bf --l2p_dram_limit 10' 00:22:04.011 14:23:02 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:04.012 14:23:02 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:22:04.012 14:23:02 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:04.012 14:23:02 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c20f919c-eaf1-4ad0-804b-03c446c9b4bf --l2p_dram_limit 10 -c nvc0n1p0 00:22:04.012 [2024-11-19 14:23:02.541400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.012 [2024-11-19 14:23:02.541435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:04.012 [2024-11-19 14:23:02.541447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:04.012 [2024-11-19 14:23:02.541455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.012 [2024-11-19 14:23:02.541491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.012 [2024-11-19 14:23:02.541498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:04.012 [2024-11-19 14:23:02.541506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:04.012 [2024-11-19 14:23:02.541512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.012 [2024-11-19 14:23:02.541528] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:04.012 [2024-11-19 14:23:02.542097] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:04.012 [2024-11-19 14:23:02.542113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.012 [2024-11-19 14:23:02.542119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:04.012 [2024-11-19 14:23:02.542126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:22:04.012 [2024-11-19 14:23:02.542132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.012 [2024-11-19 14:23:02.542157] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID dda6fd23-b75b-45b2-8b0e-79979a296360 00:22:04.012 [2024-11-19 14:23:02.543076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.012 [2024-11-19 14:23:02.543093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:04.012 [2024-11-19 14:23:02.543101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.020 ms 00:22:04.012 [2024-11-19 14:23:02.543109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.012 [2024-11-19 14:23:02.547735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.012 [2024-11-19 14:23:02.547760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:04.012 [2024-11-19 14:23:02.547767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.572 ms 00:22:04.012 [2024-11-19 14:23:02.547774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.012 [2024-11-19 14:23:02.547838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.012 [2024-11-19 14:23:02.547846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:04.012 [2024-11-19 14:23:02.547852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:04.012 [2024-11-19 14:23:02.547863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.012 [2024-11-19 14:23:02.547899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.012 [2024-11-19 14:23:02.547909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:04.012 [2024-11-19 14:23:02.547915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:04.012 [2024-11-19 14:23:02.547922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.012 [2024-11-19 14:23:02.547940] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:04.012 [2024-11-19 14:23:02.550809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.012 [2024-11-19 14:23:02.550829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:04.012 [2024-11-19 14:23:02.550838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.873 ms 00:22:04.012 [2024-11-19 14:23:02.550844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.012 [2024-11-19 14:23:02.550871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.012 [2024-11-19 14:23:02.550885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:04.012 [2024-11-19 14:23:02.550893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:04.012 [2024-11-19 14:23:02.550898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.012 [2024-11-19 14:23:02.550918] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:04.012 [2024-11-19 14:23:02.551005] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:04.012 [2024-11-19 14:23:02.551018] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:04.012 [2024-11-19 14:23:02.551026] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:04.012 [2024-11-19 14:23:02.551035] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:04.012 [2024-11-19 14:23:02.551042] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:04.012 [2024-11-19 14:23:02.551051] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:04.012 [2024-11-19 
14:23:02.551062] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:04.012 [2024-11-19 14:23:02.551069] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:04.012 [2024-11-19 14:23:02.551075] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:04.012 [2024-11-19 14:23:02.551083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.012 [2024-11-19 14:23:02.551089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:04.012 [2024-11-19 14:23:02.551096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:22:04.012 [2024-11-19 14:23:02.551101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.012 [2024-11-19 14:23:02.551150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.012 [2024-11-19 14:23:02.551156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:04.012 [2024-11-19 14:23:02.551163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:04.012 [2024-11-19 14:23:02.551170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.012 [2024-11-19 14:23:02.551226] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:04.012 [2024-11-19 14:23:02.551233] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:04.012 [2024-11-19 14:23:02.551240] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:04.012 [2024-11-19 14:23:02.551246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:04.012 [2024-11-19 14:23:02.551253] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:04.012 [2024-11-19 14:23:02.551258] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:04.012 [2024-11-19 14:23:02.551264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:04.012 [2024-11-19 14:23:02.551269] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:04.012 [2024-11-19 14:23:02.551276] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:04.012 [2024-11-19 14:23:02.551281] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:04.012 [2024-11-19 14:23:02.551287] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:04.012 [2024-11-19 14:23:02.551293] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:04.012 [2024-11-19 14:23:02.551301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:04.012 [2024-11-19 14:23:02.551306] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:04.012 [2024-11-19 14:23:02.551316] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:04.012 [2024-11-19 14:23:02.551321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:04.012 [2024-11-19 14:23:02.551328] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:04.012 [2024-11-19 14:23:02.551333] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:04.012 [2024-11-19 14:23:02.551339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:04.012 [2024-11-19 14:23:02.551344] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:04.012 [2024-11-19 14:23:02.551351] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:04.012 [2024-11-19 14:23:02.551355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:04.012 [2024-11-19 14:23:02.551362] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:04.012 [2024-11-19 14:23:02.551367] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:04.012 [2024-11-19 14:23:02.551373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:04.012 [2024-11-19 14:23:02.551378] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:04.012 [2024-11-19 14:23:02.551384] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:04.012 [2024-11-19 14:23:02.551389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:04.012 [2024-11-19 14:23:02.551395] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:04.012 [2024-11-19 14:23:02.551400] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:04.012 [2024-11-19 14:23:02.551406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:04.012 [2024-11-19 14:23:02.551410] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:04.012 [2024-11-19 14:23:02.551426] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:04.012 [2024-11-19 14:23:02.551431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:04.012 [2024-11-19 14:23:02.551437] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:04.012 [2024-11-19 14:23:02.551442] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:04.012 [2024-11-19 14:23:02.551448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:04.012 [2024-11-19 14:23:02.551453] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:04.012 [2024-11-19 14:23:02.551460] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:04.012 [2024-11-19 14:23:02.551465] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:04.012 [2024-11-19 14:23:02.551471] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:04.012 [2024-11-19 14:23:02.551477] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:04.012 [2024-11-19 14:23:02.551483] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:04.013 [2024-11-19 14:23:02.551489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:04.013 [2024-11-19 14:23:02.551498] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:04.013 [2024-11-19 14:23:02.551503] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:04.013 [2024-11-19 14:23:02.551509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:04.013 [2024-11-19 14:23:02.551515] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:04.013 [2024-11-19 14:23:02.551522] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:04.013 [2024-11-19 14:23:02.551527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:04.013 [2024-11-19 14:23:02.551535] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:04.013 [2024-11-19 14:23:02.551543] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:04.013 [2024-11-19 14:23:02.551550] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:04.013 [2024-11-19 14:23:02.551556] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:04.013 [2024-11-19 14:23:02.551562] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:04.013 [2024-11-19 14:23:02.551568] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:22:04.013 [2024-11-19 14:23:02.551575] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:04.013 [2024-11-19 14:23:02.551580] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:04.013 [2024-11-19 14:23:02.551587] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:04.013 [2024-11-19 14:23:02.551592] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:04.013 [2024-11-19 14:23:02.551598] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:04.013 [2024-11-19 14:23:02.551604] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:04.013 [2024-11-19 14:23:02.551611] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:04.013 [2024-11-19 14:23:02.551616] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:04.013 [2024-11-19 14:23:02.551626] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:04.013 [2024-11-19 14:23:02.551631] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:04.013 [2024-11-19 14:23:02.551639] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:04.013 [2024-11-19 14:23:02.551645] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:04.013 [2024-11-19 14:23:02.551652] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:04.013 [2024-11-19 14:23:02.551657] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:04.013 [2024-11-19 14:23:02.551664] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:04.013 [2024-11-19 14:23:02.551669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.013 [2024-11-19 14:23:02.551676] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Layout upgrade
00:22:04.013 [2024-11-19 14:23:02.551682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms
00:22:04.013 [2024-11-19 14:23:02.551688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:04.013 [2024-11-19 14:23:02.563805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:04.013 [2024-11-19 14:23:02.563833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:22:04.013 [2024-11-19 14:23:02.563841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.086 ms
00:22:04.013 [2024-11-19 14:23:02.563849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:04.013 [2024-11-19 14:23:02.563925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:04.013 [2024-11-19 14:23:02.563935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:22:04.013 [2024-11-19 14:23:02.563943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms
00:22:04.013 [2024-11-19 14:23:02.563950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:04.275 [2024-11-19 14:23:02.587728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:04.275 [2024-11-19 14:23:02.587752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:22:04.275 [2024-11-19 14:23:02.587760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.746 ms
00:22:04.275 [2024-11-19 14:23:02.587768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:04.275 [2024-11-19 14:23:02.587791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:04.275 [2024-11-19 14:23:02.587799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:22:04.275 [2024-11-19 14:23:02.587806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:22:04.275 [2024-11-19 14:23:02.587814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:04.275 [2024-11-19 14:23:02.588136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:04.275 [2024-11-19 14:23:02.588155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:22:04.275 [2024-11-19 14:23:02.588162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms
00:22:04.275 [2024-11-19 14:23:02.588169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:04.275 [2024-11-19 14:23:02.588254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:04.275 [2024-11-19 14:23:02.588264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:22:04.275 [2024-11-19 14:23:02.588270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:22:04.275 [2024-11-19 14:23:02.588277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:04.275 [2024-11-19 14:23:02.600132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:04.275 [2024-11-19 14:23:02.600155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:22:04.275 [2024-11-19 14:23:02.600163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.842 ms
00:22:04.275 [2024-11-19 14:23:02.600170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:04.275 [2024-11-19 14:23:02.609064] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
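Annotation: the layout dump further up explains the arithmetic behind that l2p cache message. The device advertises 20971520 L2P entries at an address size of 4 bytes, i.e. an 80 MiB mapping table (exactly the 80.00 MiB l2p region in the NV cache layout), covering 20971520 x 4096 B = 80 GiB of logical space. bdev_ftl_create was invoked with --l2p_dram_limit 10, so at most 10 MiB of that table may stay resident in DRAM, and the cache reports 9 MiB usable after its own overhead:

    echo $(( 20971520 * 4 / 1024 / 1024 ))   # full L2P table in MiB -> 80
    echo $(( 20971520 * 4096 / 1024**3 ))    # logical capacity in GiB -> 80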
00:22:04.275 [2024-11-19 14:23:02.611317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:04.275 [2024-11-19 14:23:02.611336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:22:04.275 [2024-11-19 14:23:02.611346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.091 ms
00:22:04.275 [2024-11-19 14:23:02.611352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:04.275 [2024-11-19 14:23:02.697598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:04.275 [2024-11-19 14:23:02.697651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P
00:22:04.275 [2024-11-19 14:23:02.697666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.222 ms
00:22:04.275 [2024-11-19 14:23:02.697675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:04.275 [2024-11-19 14:23:02.697716] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time.
00:22:04.275 [2024-11-19 14:23:02.697728] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB
00:22:08.484 [2024-11-19 14:23:06.727354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:08.484 [2024-11-19 14:23:06.727465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache
00:22:08.484 [2024-11-19 14:23:06.727492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4029.609 ms
00:22:08.484 [2024-11-19 14:23:06.727504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:08.484 [2024-11-19 14:23:06.727763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:08.484 [2024-11-19 14:23:06.727780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:22:08.484 [2024-11-19 14:23:06.727799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms
00:22:08.484 [2024-11-19 14:23:06.727810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:08.484 [2024-11-19 14:23:06.755939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:08.484 [2024-11-19 14:23:06.756012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata
00:22:08.484 [2024-11-19 14:23:06.756031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.063 ms
00:22:08.484 [2024-11-19 14:23:06.756041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:08.484 [2024-11-19 14:23:06.782254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:08.484 [2024-11-19 14:23:06.782298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata
00:22:08.484 [2024-11-19 14:23:06.782319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.148 ms
00:22:08.484 [2024-11-19 14:23:06.782327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:08.484 [2024-11-19 14:23:06.782705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:08.484 [2024-11-19 14:23:06.782719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:22:08.484 [2024-11-19 14:23:06.782732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms
00:22:08.484 [2024-11-19 14:23:06.782741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
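Annotation: on first startup FTL scrubs the 4 GiB NV-cache data region, and that single step dominates the 4384.768 ms total reported for "FTL startup" just below. The trace gives enough to estimate the scrub rate, roughly 1 GiB/s:

    echo $(( 4096 * 1000 / 4030 ))   # 4096 MiB over 4029.609 ms -> ~1016 MiB/s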
00:22:08.484 [2024-11-19 14:23:06.859832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:08.484 [2024-11-19 14:23:06.859905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:22:08.484 [2024-11-19 14:23:06.859923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.026 ms
00:22:08.484 [2024-11-19 14:23:06.859933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:08.484 [2024-11-19 14:23:06.889598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:08.484 [2024-11-19 14:23:06.889647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:22:08.484 [2024-11-19 14:23:06.889664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.602 ms
00:22:08.484 [2024-11-19 14:23:06.889673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:08.484 [2024-11-19 14:23:06.891375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:08.484 [2024-11-19 14:23:06.891438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:22:08.485 [2024-11-19 14:23:06.891456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.642 ms
00:22:08.485 [2024-11-19 14:23:06.891465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:08.485 [2024-11-19 14:23:06.925126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:08.485 [2024-11-19 14:23:06.925193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:22:08.485 [2024-11-19 14:23:06.925213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.588 ms
00:22:08.485 [2024-11-19 14:23:06.925222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:08.485 [2024-11-19 14:23:06.925297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:08.485 [2024-11-19 14:23:06.925307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:22:08.485 [2024-11-19 14:23:06.925319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms
00:22:08.485 [2024-11-19 14:23:06.925328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:08.485 [2024-11-19 14:23:06.925450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:08.485 [2024-11-19 14:23:06.925461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:22:08.485 [2024-11-19 14:23:06.925472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms
00:22:08.485 [2024-11-19 14:23:06.925480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:08.485 [2024-11-19 14:23:06.926714] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4384.768 ms, result 0
00:22:08.485 {
00:22:08.485 "name": "ftl0",
00:22:08.485 "uuid": "dda6fd23-b75b-45b2-8b0e-79979a296360"
00:22:08.485 }
00:22:08.485 14:23:06 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": ['
00:22:08.750 14:23:06 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:22:09.012 14:23:07 -- ftl/dirty_shutdown.sh@66 -- # echo ']}'
00:22:09.012 14:23:07 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd
00:22:09.012 14:23:07 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
00:22:09.012 /dev/nbd0
00:22:09.012 14:23:07 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0
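Annotation: with the FTL bdev created, the test exposes ftl0 as a kernel block device through the nbd module so that ordinary tools (spdk_dd against a device node, sync, md5sum) can drive it. waitfornbd then polls /proc/partitions and performs a small O_DIRECT read to confirm the device is usable, which is exactly what the trace below records. Condensed, the sequence is roughly (a sketch, with the retry loop simplified):

    modprobe nbd
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done   # waitfornbd, simplified
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct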
00:22:09.012 14:23:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:22:09.012 14:23:07 -- common/autotest_common.sh@867 -- # local i
00:22:09.012 14:23:07 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:22:09.012 14:23:07 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:22:09.012 14:23:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:22:09.012 14:23:07 -- common/autotest_common.sh@871 -- # break
00:22:09.012 14:23:07 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:22:09.012 14:23:07 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:22:09.012 14:23:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct
00:22:09.012 1+0 records in
00:22:09.012 1+0 records out
00:22:09.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396641 s, 10.3 MB/s
00:22:09.012 14:23:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:22:09.012 14:23:07 -- common/autotest_common.sh@884 -- # size=4096
00:22:09.012 14:23:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:22:09.012 14:23:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:22:09.012 14:23:07 -- common/autotest_common.sh@887 -- # return 0
00:22:09.012 14:23:07 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
00:22:09.012 [2024-11-19 14:23:07.469404] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:09.012 [2024-11-19 14:23:07.469540] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76179 ]
00:22:09.273 [2024-11-19 14:23:07.622177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:09.273 [2024-11-19 14:23:07.774769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:10.660 [2024-11-19T14:23:10.165Z] Copying: 259/1024 [MB] (259 MBps) [2024-11-19T14:23:11.106Z] Copying: 446/1024 [MB] (186 MBps) [2024-11-19T14:23:12.042Z] Copying: 633/1024 [MB] (187 MBps) [2024-11-19T14:23:12.609Z] Copying: 870/1024 [MB] (236 MBps) [2024-11-19T14:23:13.547Z] Copying: 1024/1024 [MB] (average 222 MBps)
00:22:14.985
00:22:14.985 14:23:13 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:22:16.370 14:23:14 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
00:22:16.370 [2024-11-19 14:23:14.890017] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
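Annotation: the first spdk_dd above stages 262144 x 4096 B of /dev/urandom data into a plain file at about 222 MBps, and md5sum records its checksum for later verification. The second spdk_dd, whose startup begins here, replays that file into the FTL device through /dev/nbd0 with --oflag=direct; the nbd path is far slower, with the copy below averaging 22 MBps. The data-set size works out to 1 GiB:

    echo $(( 262144 * 4096 / 1024 / 1024 ))   # -> 1024 MiB written through ftl0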
00:22:16.370 [2024-11-19 14:23:14.890096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76267 ] 00:22:16.629 [2024-11-19 14:23:15.041460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.888 [2024-11-19 14:23:15.201045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.261  [2024-11-19T14:23:17.390Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-19T14:23:18.764Z] Copying: 34/1024 [MB] (15 MBps) [2024-11-19T14:23:19.697Z] Copying: 51/1024 [MB] (16 MBps) [2024-11-19T14:23:20.631Z] Copying: 76/1024 [MB] (24 MBps) [2024-11-19T14:23:21.566Z] Copying: 94/1024 [MB] (18 MBps) [2024-11-19T14:23:22.502Z] Copying: 114/1024 [MB] (19 MBps) [2024-11-19T14:23:23.504Z] Copying: 133/1024 [MB] (19 MBps) [2024-11-19T14:23:24.442Z] Copying: 153/1024 [MB] (19 MBps) [2024-11-19T14:23:25.818Z] Copying: 172/1024 [MB] (19 MBps) [2024-11-19T14:23:26.753Z] Copying: 193/1024 [MB] (21 MBps) [2024-11-19T14:23:27.688Z] Copying: 216/1024 [MB] (22 MBps) [2024-11-19T14:23:28.623Z] Copying: 236/1024 [MB] (19 MBps) [2024-11-19T14:23:29.558Z] Copying: 261/1024 [MB] (25 MBps) [2024-11-19T14:23:30.493Z] Copying: 282/1024 [MB] (21 MBps) [2024-11-19T14:23:31.428Z] Copying: 310/1024 [MB] (27 MBps) [2024-11-19T14:23:32.803Z] Copying: 344/1024 [MB] (34 MBps) [2024-11-19T14:23:33.738Z] Copying: 370/1024 [MB] (25 MBps) [2024-11-19T14:23:34.673Z] Copying: 386/1024 [MB] (16 MBps) [2024-11-19T14:23:35.608Z] Copying: 417/1024 [MB] (30 MBps) [2024-11-19T14:23:36.543Z] Copying: 435/1024 [MB] (18 MBps) [2024-11-19T14:23:37.479Z] Copying: 456/1024 [MB] (20 MBps) [2024-11-19T14:23:38.413Z] Copying: 478/1024 [MB] (22 MBps) [2024-11-19T14:23:39.788Z] Copying: 501/1024 [MB] (23 MBps) [2024-11-19T14:23:40.721Z] Copying: 521/1024 [MB] (19 MBps) [2024-11-19T14:23:41.656Z] Copying: 544/1024 [MB] (23 MBps) [2024-11-19T14:23:42.592Z] Copying: 564/1024 [MB] (20 MBps) [2024-11-19T14:23:43.528Z] Copying: 598/1024 [MB] (33 MBps) [2024-11-19T14:23:44.464Z] Copying: 626/1024 [MB] (27 MBps) [2024-11-19T14:23:45.400Z] Copying: 648/1024 [MB] (22 MBps) [2024-11-19T14:23:46.775Z] Copying: 671/1024 [MB] (23 MBps) [2024-11-19T14:23:47.709Z] Copying: 695/1024 [MB] (23 MBps) [2024-11-19T14:23:48.640Z] Copying: 711/1024 [MB] (16 MBps) [2024-11-19T14:23:49.574Z] Copying: 732/1024 [MB] (21 MBps) [2024-11-19T14:23:50.508Z] Copying: 752/1024 [MB] (19 MBps) [2024-11-19T14:23:51.441Z] Copying: 778/1024 [MB] (25 MBps) [2024-11-19T14:23:52.819Z] Copying: 807/1024 [MB] (28 MBps) [2024-11-19T14:23:53.753Z] Copying: 828/1024 [MB] (20 MBps) [2024-11-19T14:23:54.712Z] Copying: 856/1024 [MB] (27 MBps) [2024-11-19T14:23:55.703Z] Copying: 879/1024 [MB] (23 MBps) [2024-11-19T14:23:56.638Z] Copying: 904/1024 [MB] (24 MBps) [2024-11-19T14:23:57.572Z] Copying: 927/1024 [MB] (23 MBps) [2024-11-19T14:23:58.506Z] Copying: 952/1024 [MB] (25 MBps) [2024-11-19T14:23:59.439Z] Copying: 974/1024 [MB] (21 MBps) [2024-11-19T14:24:00.812Z] Copying: 992/1024 [MB] (17 MBps) [2024-11-19T14:24:00.812Z] Copying: 1017/1024 [MB] (24 MBps) [2024-11-19T14:24:01.381Z] Copying: 1024/1024 [MB] (average 22 MBps) 00:23:02.819 00:23:02.819 14:24:01 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:02.819 14:24:01 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:03.080 14:24:01 -- ftl/dirty_shutdown.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:03.342 [2024-11-19 14:24:01.703029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.342 [2024-11-19 14:24:01.703072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:03.342 [2024-11-19 14:24:01.703084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:03.342 [2024-11-19 14:24:01.703091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.342 [2024-11-19 14:24:01.703109] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:03.342 [2024-11-19 14:24:01.705222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.342 [2024-11-19 14:24:01.705246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:03.342 [2024-11-19 14:24:01.705256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.099 ms 00:23:03.342 [2024-11-19 14:24:01.705263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.342 [2024-11-19 14:24:01.707194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.342 [2024-11-19 14:24:01.707219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:03.343 [2024-11-19 14:24:01.707233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.911 ms 00:23:03.343 [2024-11-19 14:24:01.707239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.343 [2024-11-19 14:24:01.720021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.343 [2024-11-19 14:24:01.720048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:03.343 [2024-11-19 14:24:01.720058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.765 ms 00:23:03.343 [2024-11-19 14:24:01.720064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.343 [2024-11-19 14:24:01.724782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.343 [2024-11-19 14:24:01.724806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:03.343 [2024-11-19 14:24:01.724816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.689 ms 00:23:03.343 [2024-11-19 14:24:01.724825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.343 [2024-11-19 14:24:01.743356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.343 [2024-11-19 14:24:01.743385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:03.343 [2024-11-19 14:24:01.743396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.465 ms 00:23:03.343 [2024-11-19 14:24:01.743402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.343 [2024-11-19 14:24:01.755767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.343 [2024-11-19 14:24:01.755795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:03.343 [2024-11-19 14:24:01.755806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.333 ms 00:23:03.343 [2024-11-19 14:24:01.755812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.343 [2024-11-19 14:24:01.755924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.343 [2024-11-19 14:24:01.755933] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:23:03.343 [2024-11-19 14:24:01.755942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms
00:23:03.343 [2024-11-19 14:24:01.755948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:03.343 [2024-11-19 14:24:01.774062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:03.343 [2024-11-19 14:24:01.774087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:23:03.343 [2024-11-19 14:24:01.774096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.095 ms
00:23:03.343 [2024-11-19 14:24:01.774102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:03.343 [2024-11-19 14:24:01.791696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:03.343 [2024-11-19 14:24:01.791721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:23:03.343 [2024-11-19 14:24:01.791730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.565 ms
00:23:03.343 [2024-11-19 14:24:01.791735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:03.343 [2024-11-19 14:24:01.808937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:03.343 [2024-11-19 14:24:01.808962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:23:03.343 [2024-11-19 14:24:01.808971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.174 ms
00:23:03.343 [2024-11-19 14:24:01.808977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:03.343 [2024-11-19 14:24:01.826064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:03.343 [2024-11-19 14:24:01.826089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:23:03.343 [2024-11-19 14:24:01.826097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.031 ms
00:23:03.343 [2024-11-19 14:24:01.826103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:03.343 [2024-11-19 14:24:01.826131] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:03.343 [2024-11-19 14:24:01.826141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
(Bands 2 through 84 report the identical line, 0 / 261120 wr_cnt: 0 state: free; the captured log ends mid-dump at Band 84.)
wr_cnt: 0 state: free 00:23:03.344 [2024-11-19 14:24:01.826718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:03.344 [2024-11-19 14:24:01.826724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:03.344 [2024-11-19 14:24:01.826731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:03.344 [2024-11-19 14:24:01.826737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:03.344 [2024-11-19 14:24:01.826743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:03.344 [2024-11-19 14:24:01.826751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:03.344 [2024-11-19 14:24:01.826758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:03.344 [2024-11-19 14:24:01.826767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:03.344 [2024-11-19 14:24:01.826773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:03.344 [2024-11-19 14:24:01.826780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:03.344 [2024-11-19 14:24:01.826786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:03.344 [2024-11-19 14:24:01.826795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:03.344 [2024-11-19 14:24:01.826801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:03.344 [2024-11-19 14:24:01.826808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:03.344 [2024-11-19 14:24:01.826814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:03.344 [2024-11-19 14:24:01.826820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:03.344 [2024-11-19 14:24:01.826833] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:03.344 [2024-11-19 14:24:01.826840] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dda6fd23-b75b-45b2-8b0e-79979a296360 00:23:03.344 [2024-11-19 14:24:01.826848] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:03.344 [2024-11-19 14:24:01.826855] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:03.344 [2024-11-19 14:24:01.826860] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:03.344 [2024-11-19 14:24:01.826866] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:03.344 [2024-11-19 14:24:01.826871] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:03.344 [2024-11-19 14:24:01.826887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:03.344 [2024-11-19 14:24:01.826893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:03.344 [2024-11-19 14:24:01.826899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:03.344 [2024-11-19 14:24:01.826904] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
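The "WAF: inf" in the statistics dump above is just the ratio of the two counters printed beside it: write amplification is total media writes divided by user writes, and with user writes still at 0 the ratio has no finite value. A minimal sketch of that arithmetic (plain Python with values copied from the dump; not SPDK's implementation):

    def waf(total_writes: int, user_writes: int) -> float:
        # ftl_dev_dump_stats reports total writes (960 here, all metadata),
        # user writes (0), and their ratio, printed as "inf" for x/0.
        return float("inf") if user_writes == 0 else total_writes / user_writes

    print(waf(960, 0))  # inf -- matches the dump above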
00:23:03.344 [2024-11-19 14:24:01.826912] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics' (duration: 0.782 ms, status: 0)
00:23:03.344 [2024-11-19 14:24:01.836522] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P' (duration: 9.567 ms, status: 0)
00:23:03.344 [2024-11-19 14:24:01.836707] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing' (duration: 0.131 ms, status: 0)
00:23:03.344 [2024-11-19 14:24:01.871492] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize reloc' (duration: 0.000 ms, status: 0)
00:23:03.344 [2024-11-19 14:24:01.871584] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands metadata' (duration: 0.000 ms, status: 0)
00:23:03.344 [2024-11-19 14:24:01.871656] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize trim map' (duration: 0.000 ms, status: 0)
00:23:03.344 [2024-11-19 14:24:01.871694] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize valid map' (duration: 0.000 ms, status: 0)
00:23:03.606 [2024-11-19 14:24:01.929800] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize NV cache' (duration: 0.000 ms, status: 0)
00:23:03.606 [2024-11-19 14:24:01.952864] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize metadata' (duration: 0.000 ms, status: 0)
00:23:03.606 [2024-11-19 14:24:01.952968] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize core IO channel' (duration: 0.000 ms, status: 0)
00:23:03.606 [2024-11-19 14:24:01.953025] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands' (duration: 0.000 ms, status: 0)
00:23:03.606 [2024-11-19 14:24:01.953117] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize memory pools' (duration: 0.000 ms, status: 0)
00:23:03.606 [2024-11-19 14:24:01.953165] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize superblock' (duration: 0.000 ms, status: 0)
00:23:03.607 [2024-11-19 14:24:01.953215] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open cache bdev' (duration: 0.000 ms, status: 0)
00:23:03.607 [2024-11-19 14:24:01.953270] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open base bdev' (duration: 0.000 ms, status: 0)
00:23:03.607 [2024-11-19 14:24:01.953393] mngt/ftl_mngt.c:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 250.334 ms, result 0
00:23:03.607 true
00:23:03.607 14:24:01 -- ftl/dirty_shutdown.sh@83 -- # kill -9 76024
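Every management step is traced in the same fixed shape (action name, duration, status), so a run like the 'FTL shutdown' above can be profiled mechanically when it gets slow. A small sketch (plain Python; the regular expression targets the one-line-per-step form printed in this log, nothing SPDK-specific):

    import re

    # (duration in ms, step name) pairs from trace_step lines, slowest first.
    STEP = re.compile(r"(?:Action|Rollback) '([^']+)' \(duration: ([\d.]+) ms")

    def slowest(log_text: str, top: int = 3):
        steps = [(float(ms), name) for name, ms in STEP.findall(log_text)]
        return sorted(steps, reverse=True)[:top]

For the shutdown above this puts the ~17-18 ms persist steps at the top of the 250.334 ms total.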
00:23:03.607 14:24:01 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid76024
00:23:03.607 14:24:01 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:23:03.607 [2024-11-19 14:24:02.042550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:03.607 [2024-11-19 14:24:02.042676] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76757 ]
00:23:03.869 [2024-11-19 14:24:02.191297] app.c:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:03.869 [2024-11-19 14:24:02.340488] reactor.c:reactor_run: *NOTICE*: Reactor started on core 0
00:23:05.256 [2024-11-19T14:24:04.763Z] Copying: 257/1024 [MB] (257 MBps) [2024-11-19T14:24:05.705Z] Copying: 518/1024 [MB] (261 MBps) [2024-11-19T14:24:06.647Z] Copying: 774/1024 [MB] (255 MBps) [2024-11-19T14:24:07.219Z] Copying: 1024/1024 [MB] (average 256 MBps)
00:23:08.657 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76024 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:23:08.657 14:24:07 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:23:08.657 [2024-11-19 14:24:07.187774] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:08.657 [2024-11-19 14:24:07.187906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76816 ]
00:23:08.918 [2024-11-19 14:24:07.336304] app.c:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:08.918 [2024-11-19 14:24:07.471740] reactor.c:reactor_run: *NOTICE*: Reactor started on core 0
00:23:09.179 [2024-11-19 14:24:07.676542] bdev.c:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:09.179 [2024-11-19 14:24:07.676588] bdev.c:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:09.179 [2024-11-19 14:24:07.736184] blobstore.c:bs_recover: *NOTICE*: Performing recovery on blobstore
00:23:09.179 [2024-11-19 14:24:07.736511] blobstore.c:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:23:09.179 [2024-11-19 14:24:07.736782] blobstore.c:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:23:09.751 [2024-11-19 14:24:08.003086] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration' (duration: 0.004 ms, status: 0)
00:23:09.751 [2024-11-19 14:24:08.003164] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev' (duration: 0.020 ms, status: 0)
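The two spdk_dd invocations are sized so that the post-crash pass replays exactly the data generated before the kill -9: 262144 blocks of 4096 bytes is 1 GiB, the "1024/1024 [MB]" the progress lines report, and --seek=262144 lands the write one full GiB into the ftl0 bdev. Spelled out (plain Python, values from the command lines above):

    bs, count, seek = 4096, 262144, 262144
    print(bs * count // 2**20)  # 1024 MiB transferred -> "1024/1024 [MB]"
    print(bs * seek // 2**20)   # writes start at the 1024 MiB offset on ftl0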
00:23:09.751 [2024-11-19 14:24:08.003197] mngt/ftl_mngt_bdev.c:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:09.751 [2024-11-19 14:24:08.003760] mngt/ftl_mngt_bdev.c:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:09.751 [2024-11-19 14:24:08.003779] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev' (duration: 0.586 ms, status: 0)
00:23:09.751 [2024-11-19 14:24:08.004733] mngt/ftl_mngt_md.c:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:23:09.751 [2024-11-19 14:24:08.014537] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block' (duration: 9.806 ms, status: 0)
00:23:09.751 [2024-11-19 14:24:08.014613] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block' (duration: 0.013 ms, status: 0)
00:23:09.751 [2024-11-19 14:24:08.018906] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools' (duration: 4.230 ms, status: 0)
00:23:09.751 [2024-11-19 14:24:08.019001] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands' (duration: 0.047 ms, status: 0)
00:23:09.751 [2024-11-19 14:24:08.019050] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device' (duration: 0.004 ms, status: 0)
00:23:09.751 [2024-11-19 14:24:08.019086] mngt/ftl_mngt_ioch.c:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:09.751 [2024-11-19 14:24:08.021803] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel' (duration: 2.725 ms, status: 0)
00:23:09.751 [2024-11-19 14:24:08.021863] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands' (duration: 0.008 ms, status: 0)
00:23:09.751 [2024-11-19 14:24:08.021902] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:23:09.751 [2024-11-19 14:24:08.021916] upgrade/ftl_sb_v5.c:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes
00:23:09.751 [2024-11-19 14:24:08.021940] upgrade/ftl_sb_v5.c:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:23:09.751 [2024-11-19 14:24:08.021952] upgrade/ftl_sb_v5.c:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes
00:23:09.751 [2024-11-19 14:24:08.022008] upgrade/ftl_sb_v5.c:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes
00:23:09.751 [2024-11-19 14:24:08.022016] upgrade/ftl_sb_v5.c:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:23:09.751 [2024-11-19 14:24:08.022024] upgrade/ftl_sb_v5.c:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes
00:23:09.751 [2024-11-19 14:24:08.022031] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:23:09.751 [2024-11-19 14:24:08.022037] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:23:09.751 [2024-11-19 14:24:08.022043] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:23:09.751 [2024-11-19 14:24:08.022048] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:23:09.751 [2024-11-19 14:24:08.022053] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024
00:23:09.751 [2024-11-19 14:24:08.022058] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
00:23:09.751 [2024-11-19 14:24:08.022065] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout' (duration: 0.165 ms, status: 0)
00:23:09.751 [2024-11-19 14:24:08.022125] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout' (duration: 0.034 ms, status: 0)
00:23:09.751 [2024-11-19 14:24:08.022195] ftl_layout.c:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    Region sb:             offset    0.00 MiB, blocks    0.12 MiB
    Region l2p:            offset    0.12 MiB, blocks   80.00 MiB
    Region band_md:        offset   80.12 MiB, blocks    0.50 MiB
    Region band_md_mirror: offset   80.62 MiB, blocks    0.50 MiB
    Region nvc_md:         offset   97.62 MiB, blocks    0.12 MiB
    Region nvc_md_mirror:  offset   97.75 MiB, blocks    0.12 MiB
    Region data_nvc:       offset   97.88 MiB, blocks 4096.00 MiB
    Region p2l0:           offset   81.12 MiB, blocks    4.00 MiB
    Region p2l1:           offset   85.12 MiB, blocks    4.00 MiB
    Region p2l2:           offset   89.12 MiB, blocks    4.00 MiB
    Region p2l3:           offset   93.12 MiB, blocks    4.00 MiB
    Region trim_md:        offset   97.12 MiB, blocks    0.25 MiB
    Region trim_md_mirror: offset   97.38 MiB, blocks    0.25 MiB
00:23:09.752 [2024-11-19 14:24:08.022403] ftl_layout.c:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    Region sb_mirror:      offset      0.00 MiB, blocks      0.12 MiB
    Region vmap:           offset 102400.25 MiB, blocks      3.38 MiB
    Region data_btm:       offset      0.25 MiB, blocks 102400.00 MiB
00:23:09.752 [2024-11-19 14:24:08.022455] upgrade/ftl_sb_v5.c:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
    Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80
    Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80
    Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400
    Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400
    Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400
    Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400
    Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40
    Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40
    Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20
    Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20
    Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000
    Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
00:23:09.752 [2024-11-19 14:24:08.022540] upgrade/ftl_sb_v5.c:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
    Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
    Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
    Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:23:09.752 [2024-11-19 14:24:08.022577] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade' (duration: 0.415 ms, status: 0)
00:23:09.752 [2024-11-19 14:24:08.034396] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata' (duration: 11.778 ms, status: 0)
00:23:09.752 [2024-11-19 14:24:08.034495] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses' (duration: 0.047 ms, status: 0)
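The layout numbers above are internally consistent: 20971520 L2P entries at 4 bytes per address is exactly the 80.00 MiB "Region l2p", and at a 4 KiB logical block size those entries map 80 GiB of user space out of the 103424 MiB base device (the remainder is metadata and over-provisioning). A quick cross-check (plain Python, values from the dump):

    l2p_entries, addr_size, block_size = 20971520, 4, 4096
    print(l2p_entries * addr_size / 2**20)   # 80.0 MiB -> "Region l2p ... 80.00 MiB"
    print(l2p_entries * block_size / 2**30)  # 80.0 GiB of mapped user space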
00:23:09.752 [2024-11-19 14:24:08.072503] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache' (duration: 37.956 ms, status: 0)
00:23:09.752 [2024-11-19 14:24:08.072573] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map' (duration: 0.002 ms, status: 0)
00:23:09.752 [2024-11-19 14:24:08.072918] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map' (duration: 0.281 ms, status: 0)
00:23:09.752 [2024-11-19 14:24:08.073038] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata' (duration: 0.074 ms, status: 0)
00:23:09.752 [2024-11-19 14:24:08.084028] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc' (duration: 10.953 ms, status: 0)
00:23:09.752 [2024-11-19 14:24:08.094230] ftl_nv_cache.c:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:23:09.752 [2024-11-19 14:24:08.094253] ftl_nv_cache.c:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:23:09.752 [2024-11-19 14:24:08.094261] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata' (duration: 10.131 ms, status: 0)
00:23:09.752 [2024-11-19 14:24:08.112898] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata' (duration: 18.589 ms, status: 0)
00:23:09.752 [2024-11-19 14:24:08.122348] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata' (duration: 9.372 ms, status: 0)
00:23:09.752 [2024-11-19 14:24:08.131432] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata' (duration: 9.016 ms, status: 0)
00:23:09.752 [2024-11-19 14:24:08.131729] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing' (duration: 0.206 ms, status: 0)
00:23:09.752 [2024-11-19 14:24:08.177886] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints' (duration: 46.121 ms, status: 0)
00:23:09.753 [2024-11-19 14:24:08.186089] ftl_l2p_cache.c:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:23:09.753 [2024-11-19 14:24:08.187824] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P' (duration: 9.857 ms, status: 0)
00:23:09.753 [2024-11-19 14:24:08.187916] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P' (duration: 0.004 ms, status: 0)
00:23:09.753 [2024-11-19 14:24:08.187977] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization' (duration: 0.022 ms, status: 0)
00:23:09.753 [2024-11-19 14:24:08.188953] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Free P2L region bufs' (duration: 0.943 ms, status: 0)
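Because the device was left dirty by the kill -9, this startup has to rebuild state from the NV cache and P2L checkpoints rather than trust a clean superblock, and the restore steps above account for most of its runtime (the finish_msg just below reports 205.003 ms). Tallying the durations printed by trace_step (plain Python, numbers copied from the lines above):

    restore_ms = {
        "Initialize NV cache": 37.956,
        "Restore NV cache metadata": 10.131,
        "Restore valid map metadata": 18.589,
        "Restore band info metadata": 9.372,
        "Restore trim metadata": 9.016,
        "Restore P2L checkpoints": 46.121,
    }
    print(sum(restore_ms.values()))  # ~131 ms of the 205.003 ms 'FTL startup'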
00:23:09.753 [2024-11-19 14:24:08.189009] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller' (duration: 0.003 ms, status: 0)
00:23:09.753 [2024-11-19 14:24:08.189053] mngt/ftl_mngt_self_test.c:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:23:09.753 [2024-11-19 14:24:08.189060] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup' (duration: 0.008 ms, status: 0)
00:23:09.753 [2024-11-19 14:24:08.207609] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state' (duration: 18.518 ms, status: 0)
00:23:09.753 [2024-11-19 14:24:08.207702] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization' (duration: 0.026 ms, status: 0)
00:23:09.753 [2024-11-19 14:24:08.208416] mngt/ftl_mngt.c:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 205.003 ms, result 0
00:23:10.697 [2024-11-19T14:24:10.641Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-19T14:24:11.585Z] Copying: 37/1024 [MB] (16 MBps) [2024-11-19T14:24:12.529Z] Copying: 50/1024 [MB] (13 MBps)
[2024-11-19T14:24:13.473Z] Copying: 72/1024 [MB] (21 MBps) [2024-11-19T14:24:14.416Z] Copying: 91/1024 [MB] (19 MBps) [2024-11-19T14:24:15.361Z] Copying: 101/1024 [MB] (10 MBps)
[2024-11-19T14:24:16.306Z] Copying: 112/1024 [MB] (10 MBps) [2024-11-19T14:24:17.251Z] Copying: 126/1024 [MB] (13 MBps) [2024-11-19T14:24:18.641Z] Copying: 136/1024 [MB] (10 MBps)
[2024-11-19T14:24:19.585Z] Copying: 158/1024 [MB] (22 MBps) [2024-11-19T14:24:20.529Z] Copying: 174/1024 [MB] (15 MBps) [2024-11-19T14:24:21.475Z] Copying: 187/1024 [MB] (12 MBps)
[2024-11-19T14:24:22.417Z] Copying: 218/1024 [MB] (31 MBps) [2024-11-19T14:24:23.361Z] Copying: 241/1024 [MB] (22 MBps) [2024-11-19T14:24:24.306Z] Copying: 270/1024 [MB] (29 MBps)
[2024-11-19T14:24:25.249Z] Copying: 288/1024 [MB] (17 MBps) [2024-11-19T14:24:26.262Z] Copying: 315/1024 [MB] (26 MBps) [2024-11-19T14:24:27.654Z] Copying: 336/1024 [MB] (21 MBps)
[2024-11-19T14:24:28.227Z] Copying: 348/1024 [MB] (11 MBps) [2024-11-19T14:24:29.613Z] Copying: 378/1024 [MB] (30 MBps) [2024-11-19T14:24:30.559Z] Copying: 401/1024 [MB] (22 MBps)
[2024-11-19T14:24:31.505Z] Copying: 414/1024 [MB] (13 MBps) [2024-11-19T14:24:32.450Z] Copying: 425/1024 [MB] (11 MBps) [2024-11-19T14:24:33.394Z] Copying: 442/1024 [MB] (17 MBps)
[2024-11-19T14:24:34.339Z] Copying: 461/1024 [MB] (18 MBps) [2024-11-19T14:24:35.284Z] Copying: 490/1024 [MB] (28 MBps) [2024-11-19T14:24:36.227Z] Copying: 521/1024 [MB] (30 MBps)
[2024-11-19T14:24:37.614Z] Copying: 545/1024 [MB] (24 MBps) [2024-11-19T14:24:38.571Z] Copying: 562/1024 [MB] (16 MBps) [2024-11-19T14:24:39.512Z] Copying: 582/1024 [MB] (20 MBps)
[2024-11-19T14:24:40.453Z] Copying: 601/1024 [MB] (19 MBps) [2024-11-19T14:24:41.394Z] Copying: 615/1024 [MB] (13 MBps) [2024-11-19T14:24:42.338Z] Copying: 632/1024 [MB] (16 MBps)
[2024-11-19T14:24:43.284Z] Copying: 654/1024 [MB] (21 MBps) [2024-11-19T14:24:44.229Z] Copying: 671/1024 [MB] (17 MBps) [2024-11-19T14:24:45.616Z] Copying: 700/1024 [MB] (28 MBps)
[2024-11-19T14:24:46.560Z] Copying: 724/1024 [MB] (24 MBps) [2024-11-19T14:24:47.504Z] Copying: 740/1024 [MB] (16 MBps) [2024-11-19T14:24:48.450Z] Copying: 755/1024 [MB] (14 MBps)
[2024-11-19T14:24:49.394Z] Copying: 775/1024 [MB] (20 MBps) [2024-11-19T14:24:50.338Z] Copying: 795/1024 [MB] (20 MBps) [2024-11-19T14:24:51.281Z] Copying: 816/1024 [MB] (20 MBps)
[2024-11-19T14:24:52.224Z] Copying: 829/1024 [MB] (13 MBps) [2024-11-19T14:24:53.613Z] Copying: 844/1024 [MB] (15 MBps) [2024-11-19T14:24:54.557Z] Copying: 873/1024 [MB] (28 MBps)
[2024-11-19T14:24:55.501Z] Copying: 885/1024 [MB] (12 MBps) [2024-11-19T14:24:56.447Z] Copying: 900/1024 [MB] (14 MBps) [2024-11-19T14:24:57.390Z] Copying: 915/1024 [MB] (15 MBps)
[2024-11-19T14:24:58.421Z] Copying: 936/1024 [MB] (20 MBps) [2024-11-19T14:24:59.366Z] Copying: 955/1024 [MB] (19 MBps) [2024-11-19T14:25:00.311Z] Copying: 974/1024 [MB] (19 MBps)
[2024-11-19T14:25:01.255Z] Copying: 986/1024 [MB] (11 MBps) [2024-11-19T14:25:02.639Z] Copying: 999/1024 [MB] (12 MBps) [2024-11-19T14:25:03.583Z] Copying: 1015/1024 [MB] (16 MBps)
[2024-11-19T14:25:03.845Z] Copying: 1048044/1048576 [kB] (8120 kBps) [2024-11-19T14:25:03.845Z] Copying: 1024/1024 [MB] (average 18 MBps)
00:24:05.283 [2024-11-19 14:25:03.710448] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel' (duration: 0.004 ms, status: 0)
00:24:05.283 [2024-11-19 14:25:03.711388] mngt/ftl_mngt_ioch.c:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:05.283 [2024-11-19 14:25:03.717093] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device' (duration: 5.671 ms, status: 0)
00:24:05.283 [2024-11-19 14:25:03.729104] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller' (duration: 10.230 ms, status: 0)
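The same 1 GiB that moved at an average 256 MBps into a plain file now averages 18 MBps when replayed through ftl0, since every block goes through the dirty-started FTL instance and its NV cache. The timing is consistent (plain Python, values from the progress lines above):

    print(1024 / 18)   # ~57 s at 18 MBps, matching the 14:24:08 -> 14:25:03 span
    print(1024 / 256)  # ~4 s for the first, raw-file pass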
[2024-11-19 14:25:03.752392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:05.283 [2024-11-19 14:25:03.752404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.141 ms 00:24:05.283 [2024-11-19 14:25:03.752412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.283 [2024-11-19 14:25:03.758568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.283 [2024-11-19 14:25:03.758617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:05.283 [2024-11-19 14:25:03.758630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.107 ms 00:24:05.283 [2024-11-19 14:25:03.758638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.283 [2024-11-19 14:25:03.786243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.283 [2024-11-19 14:25:03.786472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:05.283 [2024-11-19 14:25:03.786497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.549 ms 00:24:05.283 [2024-11-19 14:25:03.786505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.283 [2024-11-19 14:25:03.804122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.283 [2024-11-19 14:25:03.804176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:05.283 [2024-11-19 14:25:03.804191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.548 ms 00:24:05.283 [2024-11-19 14:25:03.804199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.544 [2024-11-19 14:25:04.001211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.545 [2024-11-19 14:25:04.001439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:05.545 [2024-11-19 14:25:04.001465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 196.953 ms 00:24:05.545 [2024-11-19 14:25:04.001476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.545 [2024-11-19 14:25:04.028417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.545 [2024-11-19 14:25:04.028609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:05.545 [2024-11-19 14:25:04.028631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.910 ms 00:24:05.545 [2024-11-19 14:25:04.028639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.545 [2024-11-19 14:25:04.054190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.545 [2024-11-19 14:25:04.054242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:05.545 [2024-11-19 14:25:04.054255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.447 ms 00:24:05.545 [2024-11-19 14:25:04.054262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.545 [2024-11-19 14:25:04.079731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.545 [2024-11-19 14:25:04.079931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:05.545 [2024-11-19 14:25:04.079954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.420 ms 00:24:05.545 [2024-11-19 14:25:04.079961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.807 [2024-11-19 14:25:04.105217] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.807 [2024-11-19 14:25:04.105264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:24:05.807 [2024-11-19 14:25:04.105277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.074 ms
00:24:05.807 [2024-11-19 14:25:04.105285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:05.807 [2024-11-19 14:25:04.105328] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:24:05.807 [2024-11-19 14:25:04.105345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 94208 / 261120 wr_cnt: 1 state: open
Bands 2-100: 0 / 261120 wr_cnt: 0 state: free
00:24:05.808 [2024-11-19 14:25:04.106195] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:24:05.808 [2024-11-19 14:25:04.106205] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dda6fd23-b75b-45b2-8b0e-79979a296360
00:24:05.808 [2024-11-19 14:25:04.106214] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 94208
00:24:05.808 [2024-11-19 14:25:04.106222] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 95168
00:24:05.808 [2024-11-19 14:25:04.106231] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 94208
00:24:05.808 [2024-11-19 14:25:04.106247] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0102
00:24:05.808 [2024-11-19 14:25:04.106265] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:05.808 [2024-11-19 14:25:04.106273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:24:05.808 [2024-11-19 14:25:04.106282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:24:05.808 [2024-11-19 14:25:04.106289] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:24:05.808 [2024-11-19 14:25:04.106296] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:24:05.808 [2024-11-19 14:25:04.106303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.808 [2024-11-19 14:25:04.106311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:24:05.808 [2024-11-19 14:25:04.106320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms
00:24:05.808 [2024-11-19 14:25:04.106327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:05.808 [2024-11-19 14:25:04.119932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.809 [2024-11-19 14:25:04.119976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:24:05.809 [2024-11-19 14:25:04.119987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.567 ms
00:24:05.809 [2024-11-19 14:25:04.119996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:05.809 [2024-11-19 14:25:04.120225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.809 [2024-11-19 14:25:04.120235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:24:05.809 [2024-11-19 14:25:04.120251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms
00:24:05.809 [2024-11-19 14:25:04.120260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:05.809 [2024-11-19 14:25:04.159145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.809 [2024-11-19 14:25:04.159332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:24:05.809 [2024-11-19 14:25:04.159353] mngt/ftl_mngt.c: 409:trace_step:
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.809 [2024-11-19 14:25:04.159362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.809 [2024-11-19 14:25:04.159432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.809 [2024-11-19 14:25:04.159441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:05.809 [2024-11-19 14:25:04.159456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.809 [2024-11-19 14:25:04.159464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.809 [2024-11-19 14:25:04.159550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.809 [2024-11-19 14:25:04.159560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:05.809 [2024-11-19 14:25:04.159569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.809 [2024-11-19 14:25:04.159577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.809 [2024-11-19 14:25:04.159593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.809 [2024-11-19 14:25:04.159601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:05.809 [2024-11-19 14:25:04.159609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.809 [2024-11-19 14:25:04.159622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.809 [2024-11-19 14:25:04.241336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.809 [2024-11-19 14:25:04.241391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:05.809 [2024-11-19 14:25:04.241404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.809 [2024-11-19 14:25:04.241412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.809 [2024-11-19 14:25:04.274314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.809 [2024-11-19 14:25:04.274494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:05.809 [2024-11-19 14:25:04.274520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.809 [2024-11-19 14:25:04.274528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.809 [2024-11-19 14:25:04.274600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.809 [2024-11-19 14:25:04.274610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:05.809 [2024-11-19 14:25:04.274618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.809 [2024-11-19 14:25:04.274626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.809 [2024-11-19 14:25:04.274669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.809 [2024-11-19 14:25:04.274680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:05.809 [2024-11-19 14:25:04.274688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.809 [2024-11-19 14:25:04.274696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.809 [2024-11-19 14:25:04.274809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.809 [2024-11-19 14:25:04.274821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory 
pools 00:24:05.809 [2024-11-19 14:25:04.274829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.809 [2024-11-19 14:25:04.274837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.809 [2024-11-19 14:25:04.274868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.809 [2024-11-19 14:25:04.274907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:05.809 [2024-11-19 14:25:04.274916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.809 [2024-11-19 14:25:04.274926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.809 [2024-11-19 14:25:04.274973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.809 [2024-11-19 14:25:04.274983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:05.809 [2024-11-19 14:25:04.274991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.809 [2024-11-19 14:25:04.275000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.809 [2024-11-19 14:25:04.275050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.809 [2024-11-19 14:25:04.275061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:05.809 [2024-11-19 14:25:04.275069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.809 [2024-11-19 14:25:04.275078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.809 [2024-11-19 14:25:04.275212] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 565.365 ms, result 0 00:24:07.195 00:24:07.195 00:24:07.195 14:25:05 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:09.736 14:25:07 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:09.736 [2024-11-19 14:25:07.779393] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
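The FTL shutdown trace above follows a fixed pattern: each management step is reported as a 406 Action/Rollback record, a 407 name record, a 409 duration record, and a 410 status record, and the ftl_dev_dump_bands/ftl_dev_dump_stats block reports write amplification as WAF = total writes / user writes, i.e. 95168 / 94208 ≈ 1.0102 in the dump above. Below is a minimal post-processing sketch for such logs, assuming one record per line as in the raw console output; the regexes and the step_durations helper are illustrative, not part of SPDK.

  import re

  # Each trace_step group prints: 406 (Action/Rollback), 407 (name),
  # 409 (duration), 410 (status). The regexes mirror the 407/409 lines
  # as they appear in the log above.
  STEP_RE = re.compile(r"407:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)$")
  DUR_RE  = re.compile(r"409:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

  def step_durations(log_text):
      """Pair every step name with the duration on the following 409 line."""
      steps, pending = [], None
      for line in log_text.splitlines():
          m = STEP_RE.search(line)
          if m:
              pending = m.group(1).strip()
              continue
          m = DUR_RE.search(line)
          if m and pending is not None:
              steps.append((pending, float(m.group(1))))
              pending = None
      return steps

  # Sanity check of the WAF figure from the ftl_dev_dump_stats block above:
  total_writes, user_writes = 95168, 94208
  assert round(total_writes / user_writes, 4) == 1.0102  # logged as "WAF: 1.0102"

Run against the shutdown section above, this pairs, for example, "Persist P2L metadata" with 196.953 ms, and the assertion reproduces the logged WAF of 1.0102.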
00:24:09.736 [2024-11-19 14:25:07.779530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77437 ] 00:24:09.736 [2024-11-19 14:25:07.931442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.736 [2024-11-19 14:25:08.153893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.997 [2024-11-19 14:25:08.444739] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:09.997 [2024-11-19 14:25:08.444820] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:10.260 [2024-11-19 14:25:08.600603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.260 [2024-11-19 14:25:08.600669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:10.260 [2024-11-19 14:25:08.600684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:10.260 [2024-11-19 14:25:08.600695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.260 [2024-11-19 14:25:08.600747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.260 [2024-11-19 14:25:08.600757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:10.260 [2024-11-19 14:25:08.600766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:10.260 [2024-11-19 14:25:08.600775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.260 [2024-11-19 14:25:08.600796] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:10.260 [2024-11-19 14:25:08.601579] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:10.260 [2024-11-19 14:25:08.601606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.260 [2024-11-19 14:25:08.601614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:10.260 [2024-11-19 14:25:08.601623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:24:10.260 [2024-11-19 14:25:08.601631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.260 [2024-11-19 14:25:08.603303] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:10.260 [2024-11-19 14:25:08.617788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.260 [2024-11-19 14:25:08.617838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:10.260 [2024-11-19 14:25:08.617852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.489 ms 00:24:10.260 [2024-11-19 14:25:08.617860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.260 [2024-11-19 14:25:08.617955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.260 [2024-11-19 14:25:08.617966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:10.260 [2024-11-19 14:25:08.617976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:10.260 [2024-11-19 14:25:08.617985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.260 [2024-11-19 14:25:08.626250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.260 [2024-11-19 
14:25:08.626296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:10.260 [2024-11-19 14:25:08.626306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.164 ms 00:24:10.260 [2024-11-19 14:25:08.626314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.260 [2024-11-19 14:25:08.626411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.260 [2024-11-19 14:25:08.626421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:10.260 [2024-11-19 14:25:08.626429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:24:10.260 [2024-11-19 14:25:08.626439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.260 [2024-11-19 14:25:08.626482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.260 [2024-11-19 14:25:08.626491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:10.260 [2024-11-19 14:25:08.626498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:10.260 [2024-11-19 14:25:08.626505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.260 [2024-11-19 14:25:08.626536] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:10.260 [2024-11-19 14:25:08.630800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.260 [2024-11-19 14:25:08.630843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:10.260 [2024-11-19 14:25:08.630853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.277 ms 00:24:10.260 [2024-11-19 14:25:08.630861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.260 [2024-11-19 14:25:08.630915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.260 [2024-11-19 14:25:08.630926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:10.260 [2024-11-19 14:25:08.630937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:10.260 [2024-11-19 14:25:08.630945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.260 [2024-11-19 14:25:08.630996] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:10.260 [2024-11-19 14:25:08.631019] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:10.260 [2024-11-19 14:25:08.631054] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:10.260 [2024-11-19 14:25:08.631070] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:10.260 [2024-11-19 14:25:08.631144] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:10.260 [2024-11-19 14:25:08.631158] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:10.260 [2024-11-19 14:25:08.631168] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:10.260 [2024-11-19 14:25:08.631178] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:10.260 [2024-11-19 14:25:08.631188] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:10.260 [2024-11-19 14:25:08.631196] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:10.260 [2024-11-19 14:25:08.631204] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:10.260 [2024-11-19 14:25:08.631211] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:10.260 [2024-11-19 14:25:08.631218] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:10.260 [2024-11-19 14:25:08.631226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.260 [2024-11-19 14:25:08.631234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:10.260 [2024-11-19 14:25:08.631242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:24:10.260 [2024-11-19 14:25:08.631252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.260 [2024-11-19 14:25:08.631312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.260 [2024-11-19 14:25:08.631326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:10.261 [2024-11-19 14:25:08.631334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:10.261 [2024-11-19 14:25:08.631342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.261 [2024-11-19 14:25:08.631413] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:10.261 [2024-11-19 14:25:08.631422] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:10.261 [2024-11-19 14:25:08.631431] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:10.261 [2024-11-19 14:25:08.631439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.261 [2024-11-19 14:25:08.631451] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:10.261 [2024-11-19 14:25:08.631457] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:10.261 [2024-11-19 14:25:08.631464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:10.261 [2024-11-19 14:25:08.631471] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:10.261 [2024-11-19 14:25:08.631505] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:10.261 [2024-11-19 14:25:08.631513] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:10.261 [2024-11-19 14:25:08.631520] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:10.261 [2024-11-19 14:25:08.631528] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:10.261 [2024-11-19 14:25:08.631535] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:10.261 [2024-11-19 14:25:08.631543] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:10.261 [2024-11-19 14:25:08.631550] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:10.261 [2024-11-19 14:25:08.631557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.261 [2024-11-19 14:25:08.631571] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:10.261 [2024-11-19 14:25:08.631579] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:10.261 [2024-11-19 14:25:08.631586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:24:10.261 [2024-11-19 14:25:08.631593] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:10.261 [2024-11-19 14:25:08.631600] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:10.261 [2024-11-19 14:25:08.631607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:10.261 [2024-11-19 14:25:08.631614] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:10.261 [2024-11-19 14:25:08.631621] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:10.261 [2024-11-19 14:25:08.631627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:10.261 [2024-11-19 14:25:08.631633] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:10.261 [2024-11-19 14:25:08.631641] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:10.261 [2024-11-19 14:25:08.631648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:10.261 [2024-11-19 14:25:08.631654] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:10.261 [2024-11-19 14:25:08.631660] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:10.261 [2024-11-19 14:25:08.631666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:10.261 [2024-11-19 14:25:08.631672] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:10.261 [2024-11-19 14:25:08.631679] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:10.261 [2024-11-19 14:25:08.631687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:10.261 [2024-11-19 14:25:08.631693] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:10.261 [2024-11-19 14:25:08.631701] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:10.261 [2024-11-19 14:25:08.631707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:10.261 [2024-11-19 14:25:08.631714] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:10.261 [2024-11-19 14:25:08.631721] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:10.261 [2024-11-19 14:25:08.631727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:10.261 [2024-11-19 14:25:08.631735] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:10.261 [2024-11-19 14:25:08.631743] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:10.261 [2024-11-19 14:25:08.631751] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:10.261 [2024-11-19 14:25:08.631759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.261 [2024-11-19 14:25:08.631768] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:10.261 [2024-11-19 14:25:08.631775] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:10.261 [2024-11-19 14:25:08.631782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:10.261 [2024-11-19 14:25:08.631789] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:10.261 [2024-11-19 14:25:08.631795] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:10.261 [2024-11-19 14:25:08.631802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:10.261 [2024-11-19 14:25:08.631811] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:10.261 [2024-11-19 14:25:08.631821] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:10.261 [2024-11-19 14:25:08.631830] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:10.261 [2024-11-19 14:25:08.631837] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:10.261 [2024-11-19 14:25:08.631844] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:10.261 [2024-11-19 14:25:08.631852] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:10.261 [2024-11-19 14:25:08.631859] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:10.261 [2024-11-19 14:25:08.631866] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:10.261 [2024-11-19 14:25:08.631887] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:10.261 [2024-11-19 14:25:08.631895] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:10.261 [2024-11-19 14:25:08.631902] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:10.261 [2024-11-19 14:25:08.631909] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:10.261 [2024-11-19 14:25:08.631916] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:10.261 [2024-11-19 14:25:08.631924] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:10.261 [2024-11-19 14:25:08.631932] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:10.261 [2024-11-19 14:25:08.631939] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:10.261 [2024-11-19 14:25:08.631947] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:10.261 [2024-11-19 14:25:08.631956] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:10.261 [2024-11-19 14:25:08.631963] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:10.261 [2024-11-19 14:25:08.631970] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:10.261 [2024-11-19 14:25:08.631977] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:24:10.261 [2024-11-19 14:25:08.631998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.261 [2024-11-19 14:25:08.632008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:10.261 [2024-11-19 14:25:08.632016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:24:10.261 [2024-11-19 14:25:08.632027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.261 [2024-11-19 14:25:08.650156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.261 [2024-11-19 14:25:08.650205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:10.261 [2024-11-19 14:25:08.650218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.079 ms 00:24:10.261 [2024-11-19 14:25:08.650233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.261 [2024-11-19 14:25:08.650325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.261 [2024-11-19 14:25:08.650333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:10.261 [2024-11-19 14:25:08.650342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:10.261 [2024-11-19 14:25:08.650349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.261 [2024-11-19 14:25:08.696604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.261 [2024-11-19 14:25:08.696657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:10.261 [2024-11-19 14:25:08.696670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.200 ms 00:24:10.261 [2024-11-19 14:25:08.696679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.261 [2024-11-19 14:25:08.696731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.261 [2024-11-19 14:25:08.696740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:10.261 [2024-11-19 14:25:08.696749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:10.261 [2024-11-19 14:25:08.696757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.261 [2024-11-19 14:25:08.697351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.261 [2024-11-19 14:25:08.697387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:10.261 [2024-11-19 14:25:08.697403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:24:10.261 [2024-11-19 14:25:08.697412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.261 [2024-11-19 14:25:08.697541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.261 [2024-11-19 14:25:08.697551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:10.261 [2024-11-19 14:25:08.697560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:24:10.262 [2024-11-19 14:25:08.697567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.262 [2024-11-19 14:25:08.714169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.262 [2024-11-19 14:25:08.714213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:10.262 [2024-11-19 14:25:08.714224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.577 ms 00:24:10.262 [2024-11-19 
14:25:08.714232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.262 [2024-11-19 14:25:08.728378] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:10.262 [2024-11-19 14:25:08.728570] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:10.262 [2024-11-19 14:25:08.728591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.262 [2024-11-19 14:25:08.728600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:10.262 [2024-11-19 14:25:08.728610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.248 ms 00:24:10.262 [2024-11-19 14:25:08.728616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.262 [2024-11-19 14:25:08.755172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.262 [2024-11-19 14:25:08.755348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:10.262 [2024-11-19 14:25:08.755370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.509 ms 00:24:10.262 [2024-11-19 14:25:08.755378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.262 [2024-11-19 14:25:08.768685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.262 [2024-11-19 14:25:08.768732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:10.262 [2024-11-19 14:25:08.768744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.251 ms 00:24:10.262 [2024-11-19 14:25:08.768751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.262 [2024-11-19 14:25:08.781606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.262 [2024-11-19 14:25:08.781652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:10.262 [2024-11-19 14:25:08.781675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.807 ms 00:24:10.262 [2024-11-19 14:25:08.781682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.262 [2024-11-19 14:25:08.782116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.262 [2024-11-19 14:25:08.782131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:10.262 [2024-11-19 14:25:08.782140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:24:10.262 [2024-11-19 14:25:08.782148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.523 [2024-11-19 14:25:08.849482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.523 [2024-11-19 14:25:08.849696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:10.523 [2024-11-19 14:25:08.849720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.315 ms 00:24:10.523 [2024-11-19 14:25:08.849730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.523 [2024-11-19 14:25:08.861117] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:10.523 [2024-11-19 14:25:08.864221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.523 [2024-11-19 14:25:08.864384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:10.523 [2024-11-19 14:25:08.864412] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.444 ms 00:24:10.523 [2024-11-19 14:25:08.864420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.523 [2024-11-19 14:25:08.864496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.523 [2024-11-19 14:25:08.864507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:10.523 [2024-11-19 14:25:08.864516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:10.523 [2024-11-19 14:25:08.864524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.523 [2024-11-19 14:25:08.865907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.523 [2024-11-19 14:25:08.865950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:10.523 [2024-11-19 14:25:08.865962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.346 ms 00:24:10.523 [2024-11-19 14:25:08.865977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.523 [2024-11-19 14:25:08.867318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.523 [2024-11-19 14:25:08.867359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:10.523 [2024-11-19 14:25:08.867370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms 00:24:10.523 [2024-11-19 14:25:08.867378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.523 [2024-11-19 14:25:08.867414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.523 [2024-11-19 14:25:08.867430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:10.523 [2024-11-19 14:25:08.867438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:10.523 [2024-11-19 14:25:08.867446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.523 [2024-11-19 14:25:08.867496] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:10.523 [2024-11-19 14:25:08.867511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.523 [2024-11-19 14:25:08.867519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:10.523 [2024-11-19 14:25:08.867527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:10.523 [2024-11-19 14:25:08.867535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.523 [2024-11-19 14:25:08.893924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.523 [2024-11-19 14:25:08.894093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:10.523 [2024-11-19 14:25:08.894155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.369 ms 00:24:10.523 [2024-11-19 14:25:08.894188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.523 [2024-11-19 14:25:08.894845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.523 [2024-11-19 14:25:08.894967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:10.523 [2024-11-19 14:25:08.895162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:10.523 [2024-11-19 14:25:08.895206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.523 [2024-11-19 14:25:08.901729] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 298.088 ms, result 0 00:24:11.913  [2024-11-19T14:25:11.419Z] Copying: 1224/1048576 [kB] (1224 kBps) [2024-11-19T14:25:12.363Z] Copying: 4252/1048576 [kB] (3028 kBps) [2024-11-19T14:25:13.309Z] Copying: 14/1024 [MB] (10 MBps) [2024-11-19T14:25:14.253Z] Copying: 31/1024 [MB] (17 MBps) [2024-11-19T14:25:15.198Z] Copying: 56/1024 [MB] (24 MBps) [2024-11-19T14:25:16.143Z] Copying: 83/1024 [MB] (26 MBps) [2024-11-19T14:25:17.089Z] Copying: 126/1024 [MB] (43 MBps) [2024-11-19T14:25:18.477Z] Copying: 154/1024 [MB] (28 MBps) [2024-11-19T14:25:19.420Z] Copying: 189/1024 [MB] (35 MBps) [2024-11-19T14:25:20.365Z] Copying: 207/1024 [MB] (18 MBps) [2024-11-19T14:25:21.307Z] Copying: 226/1024 [MB] (18 MBps) [2024-11-19T14:25:22.258Z] Copying: 264/1024 [MB] (38 MBps) [2024-11-19T14:25:23.201Z] Copying: 303/1024 [MB] (38 MBps) [2024-11-19T14:25:24.145Z] Copying: 330/1024 [MB] (27 MBps) [2024-11-19T14:25:25.091Z] Copying: 360/1024 [MB] (30 MBps) [2024-11-19T14:25:26.479Z] Copying: 391/1024 [MB] (30 MBps) [2024-11-19T14:25:27.422Z] Copying: 434/1024 [MB] (42 MBps) [2024-11-19T14:25:28.366Z] Copying: 477/1024 [MB] (42 MBps) [2024-11-19T14:25:29.320Z] Copying: 496/1024 [MB] (18 MBps) [2024-11-19T14:25:30.360Z] Copying: 524/1024 [MB] (28 MBps) [2024-11-19T14:25:31.307Z] Copying: 553/1024 [MB] (28 MBps) [2024-11-19T14:25:32.252Z] Copying: 580/1024 [MB] (27 MBps) [2024-11-19T14:25:33.198Z] Copying: 602/1024 [MB] (22 MBps) [2024-11-19T14:25:34.143Z] Copying: 624/1024 [MB] (22 MBps) [2024-11-19T14:25:35.087Z] Copying: 662/1024 [MB] (37 MBps) [2024-11-19T14:25:36.477Z] Copying: 690/1024 [MB] (28 MBps) [2024-11-19T14:25:37.422Z] Copying: 708/1024 [MB] (18 MBps) [2024-11-19T14:25:38.367Z] Copying: 734/1024 [MB] (25 MBps) [2024-11-19T14:25:39.312Z] Copying: 755/1024 [MB] (20 MBps) [2024-11-19T14:25:40.255Z] Copying: 785/1024 [MB] (29 MBps) [2024-11-19T14:25:41.200Z] Copying: 829/1024 [MB] (44 MBps) [2024-11-19T14:25:42.144Z] Copying: 854/1024 [MB] (25 MBps) [2024-11-19T14:25:43.087Z] Copying: 880/1024 [MB] (25 MBps) [2024-11-19T14:25:44.477Z] Copying: 909/1024 [MB] (29 MBps) [2024-11-19T14:25:45.421Z] Copying: 934/1024 [MB] (24 MBps) [2024-11-19T14:25:46.366Z] Copying: 965/1024 [MB] (31 MBps) [2024-11-19T14:25:46.366Z] Copying: 1012/1024 [MB] (46 MBps) [2024-11-19T14:25:48.286Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-19 14:25:47.837575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.724 [2024-11-19 14:25:47.837660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:49.724 [2024-11-19 14:25:47.837683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:49.724 [2024-11-19 14:25:47.837695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.724 [2024-11-19 14:25:47.837728] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:49.724 [2024-11-19 14:25:47.841797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.724 [2024-11-19 14:25:47.841961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:49.724 [2024-11-19 14:25:47.841977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.047 ms 00:24:49.724 [2024-11-19 14:25:47.841997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.724 [2024-11-19 14:25:47.842347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.724 
[2024-11-19 14:25:47.842362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:49.724 [2024-11-19 14:25:47.842375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:24:49.724 [2024-11-19 14:25:47.842386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.724 [2024-11-19 14:25:47.857271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.724 [2024-11-19 14:25:47.857320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:49.724 [2024-11-19 14:25:47.857332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.861 ms 00:24:49.724 [2024-11-19 14:25:47.857341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.724 [2024-11-19 14:25:47.863542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.724 [2024-11-19 14:25:47.863586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:49.724 [2024-11-19 14:25:47.863600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.153 ms 00:24:49.724 [2024-11-19 14:25:47.863609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.724 [2024-11-19 14:25:47.890485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.724 [2024-11-19 14:25:47.890532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:49.724 [2024-11-19 14:25:47.890544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.808 ms 00:24:49.724 [2024-11-19 14:25:47.890552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.724 [2024-11-19 14:25:47.907856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.724 [2024-11-19 14:25:47.908045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:49.724 [2024-11-19 14:25:47.908502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.256 ms 00:24:49.724 [2024-11-19 14:25:47.908596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.724 [2024-11-19 14:25:47.917723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.725 [2024-11-19 14:25:47.917893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:49.725 [2024-11-19 14:25:47.918095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.008 ms 00:24:49.725 [2024-11-19 14:25:47.918173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.725 [2024-11-19 14:25:47.944453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.725 [2024-11-19 14:25:47.944630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:49.725 [2024-11-19 14:25:47.944704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.229 ms 00:24:49.725 [2024-11-19 14:25:47.944728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.725 [2024-11-19 14:25:47.970441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.725 [2024-11-19 14:25:47.970596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:49.725 [2024-11-19 14:25:47.970615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.590 ms 00:24:49.725 [2024-11-19 14:25:47.970634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.725 [2024-11-19 14:25:47.995662] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:49.725 [2024-11-19 14:25:47.995709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:24:49.725 [2024-11-19 14:25:47.995720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.992 ms
00:24:49.725 [2024-11-19 14:25:47.995727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:49.725 [2024-11-19 14:25:48.020550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:49.725 [2024-11-19 14:25:48.020593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:24:49.725 [2024-11-19 14:25:48.020605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.738 ms
00:24:49.725 [2024-11-19 14:25:48.020612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:49.725 [2024-11-19 14:25:48.020654] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:24:49.725 [2024-11-19 14:25:48.020670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:24:49.725 [2024-11-19 14:25:48.020681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open
Bands 3-67: 0 / 261120 wr_cnt: 0 state: free
00:24:49.725 [2024-11-19 14:25:48.021240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:49.725 [2024-11-19 14:25:48.021247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:49.725 [2024-11-19 14:25:48.021255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:49.725 [2024-11-19 14:25:48.021263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:49.725 [2024-11-19 14:25:48.021271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021436] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:49.726 [2024-11-19 14:25:48.021506] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:49.726 [2024-11-19 14:25:48.021514] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dda6fd23-b75b-45b2-8b0e-79979a296360 00:24:49.726 [2024-11-19 14:25:48.021528] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:24:49.726 [2024-11-19 14:25:48.021536] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 172224 00:24:49.726 [2024-11-19 14:25:48.021543] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 170240 00:24:49.726 [2024-11-19 14:25:48.021552] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0117 00:24:49.726 [2024-11-19 14:25:48.021560] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:49.726 [2024-11-19 14:25:48.021569] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:49.726 [2024-11-19 14:25:48.021577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:49.726 [2024-11-19 14:25:48.021584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:49.726 [2024-11-19 14:25:48.021598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:49.726 [2024-11-19 14:25:48.021606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.726 [2024-11-19 14:25:48.021613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:49.726 [2024-11-19 14:25:48.021622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:24:49.726 [2024-11-19 14:25:48.021629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.726 [2024-11-19 14:25:48.034873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.726 [2024-11-19 14:25:48.034923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:49.726 [2024-11-19 14:25:48.034935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.207 ms 00:24:49.726 [2024-11-19 14:25:48.034943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.726 [2024-11-19 14:25:48.035170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.726 [2024-11-19 14:25:48.035180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:49.726 [2024-11-19 14:25:48.035189] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:24:49.726 [2024-11-19 14:25:48.035202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.726 [2024-11-19 14:25:48.074196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.726 [2024-11-19 14:25:48.074372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:49.726 [2024-11-19 14:25:48.074393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.726 [2024-11-19 14:25:48.074401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.726 [2024-11-19 14:25:48.074461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.726 [2024-11-19 14:25:48.074470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:49.726 [2024-11-19 14:25:48.074478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.726 [2024-11-19 14:25:48.074493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.726 [2024-11-19 14:25:48.074571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.726 [2024-11-19 14:25:48.074582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:49.726 [2024-11-19 14:25:48.074590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.726 [2024-11-19 14:25:48.074598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.726 [2024-11-19 14:25:48.074614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.726 [2024-11-19 14:25:48.074623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:49.726 [2024-11-19 14:25:48.074631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.726 [2024-11-19 14:25:48.074639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.726 [2024-11-19 14:25:48.155098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.726 [2024-11-19 14:25:48.155163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:49.726 [2024-11-19 14:25:48.155174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.726 [2024-11-19 14:25:48.155191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.726 [2024-11-19 14:25:48.187603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.726 [2024-11-19 14:25:48.187647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:49.726 [2024-11-19 14:25:48.187659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.726 [2024-11-19 14:25:48.187674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.726 [2024-11-19 14:25:48.187738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.726 [2024-11-19 14:25:48.187747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:49.726 [2024-11-19 14:25:48.187756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.726 [2024-11-19 14:25:48.187765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.726 [2024-11-19 14:25:48.187804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.726 [2024-11-19 14:25:48.187814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
00:24:49.726 [2024-11-19 14:25:48.187822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.726 [2024-11-19 14:25:48.187831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.726 [2024-11-19 14:25:48.187963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.726 [2024-11-19 14:25:48.187975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:49.726 [2024-11-19 14:25:48.187983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.726 [2024-11-19 14:25:48.187992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.726 [2024-11-19 14:25:48.188028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.726 [2024-11-19 14:25:48.188038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:49.726 [2024-11-19 14:25:48.188046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.726 [2024-11-19 14:25:48.188055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.726 [2024-11-19 14:25:48.188099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.726 [2024-11-19 14:25:48.188109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:49.726 [2024-11-19 14:25:48.188117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.726 [2024-11-19 14:25:48.188125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.726 [2024-11-19 14:25:48.188173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.726 [2024-11-19 14:25:48.188183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:49.726 [2024-11-19 14:25:48.188192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.726 [2024-11-19 14:25:48.188201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.726 [2024-11-19 14:25:48.188338] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 350.736 ms, result 0 00:24:50.672 00:24:50.672 00:24:50.672 14:25:49 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:53.215 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:53.215 14:25:51 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:53.215 [2024-11-19 14:25:51.416333] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:53.215 [2024-11-19 14:25:51.416462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77890 ] 00:24:53.215 [2024-11-19 14:25:51.571748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.477 [2024-11-19 14:25:51.788347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.738 [2024-11-19 14:25:52.075398] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:53.738 [2024-11-19 14:25:52.075478] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:53.738 [2024-11-19 14:25:52.231534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.738 [2024-11-19 14:25:52.231591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:53.738 [2024-11-19 14:25:52.231606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:53.738 [2024-11-19 14:25:52.231617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.738 [2024-11-19 14:25:52.231665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.738 [2024-11-19 14:25:52.231675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:53.738 [2024-11-19 14:25:52.231684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:53.738 [2024-11-19 14:25:52.231693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.738 [2024-11-19 14:25:52.231713] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:53.738 [2024-11-19 14:25:52.232486] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:53.738 [2024-11-19 14:25:52.232504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.738 [2024-11-19 14:25:52.232512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:53.738 [2024-11-19 14:25:52.232521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:24:53.738 [2024-11-19 14:25:52.232529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.738 [2024-11-19 14:25:52.234287] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:53.739 [2024-11-19 14:25:52.249053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.739 [2024-11-19 14:25:52.249099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:53.739 [2024-11-19 14:25:52.249112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.768 ms 00:24:53.739 [2024-11-19 14:25:52.249120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.739 [2024-11-19 14:25:52.249193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.739 [2024-11-19 14:25:52.249202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:53.739 [2024-11-19 14:25:52.249219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:53.739 [2024-11-19 14:25:52.249227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.739 [2024-11-19 14:25:52.257141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.739 [2024-11-19 
14:25:52.257184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:53.739 [2024-11-19 14:25:52.257194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.817 ms 00:24:53.739 [2024-11-19 14:25:52.257202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.739 [2024-11-19 14:25:52.257296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.739 [2024-11-19 14:25:52.257306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:53.739 [2024-11-19 14:25:52.257315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:24:53.739 [2024-11-19 14:25:52.257323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.739 [2024-11-19 14:25:52.257366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.739 [2024-11-19 14:25:52.257376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:53.739 [2024-11-19 14:25:52.257384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:53.739 [2024-11-19 14:25:52.257391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.739 [2024-11-19 14:25:52.257422] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:53.739 [2024-11-19 14:25:52.261634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.739 [2024-11-19 14:25:52.261670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:53.739 [2024-11-19 14:25:52.261680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.225 ms 00:24:53.739 [2024-11-19 14:25:52.261688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.739 [2024-11-19 14:25:52.261724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.739 [2024-11-19 14:25:52.261732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:53.739 [2024-11-19 14:25:52.261741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:53.739 [2024-11-19 14:25:52.261751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.739 [2024-11-19 14:25:52.261801] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:53.739 [2024-11-19 14:25:52.261822] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:53.739 [2024-11-19 14:25:52.261858] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:53.739 [2024-11-19 14:25:52.261893] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:53.739 [2024-11-19 14:25:52.261970] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:53.739 [2024-11-19 14:25:52.261980] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:53.739 [2024-11-19 14:25:52.261993] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:53.739 [2024-11-19 14:25:52.262007] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:53.739 [2024-11-19 14:25:52.262016] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:53.739 [2024-11-19 14:25:52.262024] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:53.739 [2024-11-19 14:25:52.262032] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:53.739 [2024-11-19 14:25:52.262039] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:53.739 [2024-11-19 14:25:52.262046] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:53.739 [2024-11-19 14:25:52.262055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.739 [2024-11-19 14:25:52.262062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:53.739 [2024-11-19 14:25:52.262070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:24:53.739 [2024-11-19 14:25:52.262077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.739 [2024-11-19 14:25:52.262140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.739 [2024-11-19 14:25:52.262148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:53.739 [2024-11-19 14:25:52.262156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:53.739 [2024-11-19 14:25:52.262163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.739 [2024-11-19 14:25:52.262233] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:53.739 [2024-11-19 14:25:52.262243] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:53.739 [2024-11-19 14:25:52.262251] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:53.739 [2024-11-19 14:25:52.262259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:53.739 [2024-11-19 14:25:52.262267] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:53.739 [2024-11-19 14:25:52.262273] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:53.739 [2024-11-19 14:25:52.262280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:53.739 [2024-11-19 14:25:52.262288] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:53.739 [2024-11-19 14:25:52.262295] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:53.739 [2024-11-19 14:25:52.262303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:53.739 [2024-11-19 14:25:52.262310] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:53.739 [2024-11-19 14:25:52.262317] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:53.739 [2024-11-19 14:25:52.262323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:53.739 [2024-11-19 14:25:52.262330] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:53.739 [2024-11-19 14:25:52.262337] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:53.739 [2024-11-19 14:25:52.262343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:53.739 [2024-11-19 14:25:52.262358] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:53.739 [2024-11-19 14:25:52.262365] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:53.739 [2024-11-19 14:25:52.262371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:24:53.739 [2024-11-19 14:25:52.262378] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:53.739 [2024-11-19 14:25:52.262385] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:53.739 [2024-11-19 14:25:52.262391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:53.739 [2024-11-19 14:25:52.262398] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:53.739 [2024-11-19 14:25:52.262404] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:53.739 [2024-11-19 14:25:52.262410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:53.739 [2024-11-19 14:25:52.262417] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:53.739 [2024-11-19 14:25:52.262424] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:53.739 [2024-11-19 14:25:52.262431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:53.739 [2024-11-19 14:25:52.262438] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:53.739 [2024-11-19 14:25:52.262446] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:53.739 [2024-11-19 14:25:52.262452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:53.739 [2024-11-19 14:25:52.262458] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:53.739 [2024-11-19 14:25:52.262465] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:53.739 [2024-11-19 14:25:52.262471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:53.739 [2024-11-19 14:25:52.262477] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:53.739 [2024-11-19 14:25:52.262484] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:53.739 [2024-11-19 14:25:52.262490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:53.739 [2024-11-19 14:25:52.262496] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:53.739 [2024-11-19 14:25:52.262504] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:53.739 [2024-11-19 14:25:52.262510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:53.739 [2024-11-19 14:25:52.262516] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:53.739 [2024-11-19 14:25:52.262528] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:53.739 [2024-11-19 14:25:52.262536] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:53.739 [2024-11-19 14:25:52.262543] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:53.739 [2024-11-19 14:25:52.262550] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:53.739 [2024-11-19 14:25:52.262557] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:53.739 [2024-11-19 14:25:52.262564] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:53.739 [2024-11-19 14:25:52.262570] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:53.739 [2024-11-19 14:25:52.262576] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:53.739 [2024-11-19 14:25:52.262584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:53.739 [2024-11-19 14:25:52.262592] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:53.739 [2024-11-19 14:25:52.262601] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:53.739 [2024-11-19 14:25:52.262610] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:53.739 [2024-11-19 14:25:52.262617] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:53.740 [2024-11-19 14:25:52.262623] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:53.740 [2024-11-19 14:25:52.262631] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:53.740 [2024-11-19 14:25:52.262638] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:53.740 [2024-11-19 14:25:52.262645] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:53.740 [2024-11-19 14:25:52.262652] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:53.740 [2024-11-19 14:25:52.262659] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:53.740 [2024-11-19 14:25:52.262666] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:53.740 [2024-11-19 14:25:52.262674] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:53.740 [2024-11-19 14:25:52.262680] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:53.740 [2024-11-19 14:25:52.262688] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:53.740 [2024-11-19 14:25:52.262695] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:53.740 [2024-11-19 14:25:52.262702] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:53.740 [2024-11-19 14:25:52.262710] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:53.740 [2024-11-19 14:25:52.262718] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:53.740 [2024-11-19 14:25:52.262724] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:53.740 [2024-11-19 14:25:52.262731] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:53.740 [2024-11-19 14:25:52.262739] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:24:53.740 [2024-11-19 14:25:52.262747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.740 [2024-11-19 14:25:52.262755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:53.740 [2024-11-19 14:25:52.262763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:24:53.740 [2024-11-19 14:25:52.262770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.740 [2024-11-19 14:25:52.280685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.740 [2024-11-19 14:25:52.280869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:53.740 [2024-11-19 14:25:52.280903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.873 ms 00:24:53.740 [2024-11-19 14:25:52.280919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.740 [2024-11-19 14:25:52.281012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.740 [2024-11-19 14:25:52.281021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:53.740 [2024-11-19 14:25:52.281030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:53.740 [2024-11-19 14:25:52.281038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.002 [2024-11-19 14:25:52.328252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.002 [2024-11-19 14:25:52.328439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:54.002 [2024-11-19 14:25:52.328460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.161 ms 00:24:54.002 [2024-11-19 14:25:52.328469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.002 [2024-11-19 14:25:52.328520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.002 [2024-11-19 14:25:52.328530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:54.002 [2024-11-19 14:25:52.328540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:54.002 [2024-11-19 14:25:52.328548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.002 [2024-11-19 14:25:52.329154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.002 [2024-11-19 14:25:52.329186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:54.002 [2024-11-19 14:25:52.329202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:24:54.002 [2024-11-19 14:25:52.329211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.002 [2024-11-19 14:25:52.329335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.002 [2024-11-19 14:25:52.329345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:54.002 [2024-11-19 14:25:52.329353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:24:54.002 [2024-11-19 14:25:52.329360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.002 [2024-11-19 14:25:52.345752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.002 [2024-11-19 14:25:52.345798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:54.002 [2024-11-19 14:25:52.345809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.368 ms 00:24:54.002 [2024-11-19 
14:25:52.345817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.002 [2024-11-19 14:25:52.360033] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:54.002 [2024-11-19 14:25:52.360079] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:54.002 [2024-11-19 14:25:52.360092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.002 [2024-11-19 14:25:52.360101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:54.002 [2024-11-19 14:25:52.360111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.129 ms 00:24:54.002 [2024-11-19 14:25:52.360118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.002 [2024-11-19 14:25:52.386318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.002 [2024-11-19 14:25:52.386365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:54.002 [2024-11-19 14:25:52.386377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.148 ms 00:24:54.002 [2024-11-19 14:25:52.386385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.002 [2024-11-19 14:25:52.399535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.002 [2024-11-19 14:25:52.399581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:54.002 [2024-11-19 14:25:52.399592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.093 ms 00:24:54.002 [2024-11-19 14:25:52.399600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.002 [2024-11-19 14:25:52.412291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.002 [2024-11-19 14:25:52.412344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:54.002 [2024-11-19 14:25:52.412355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.646 ms 00:24:54.002 [2024-11-19 14:25:52.412362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.002 [2024-11-19 14:25:52.412744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.002 [2024-11-19 14:25:52.412758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:54.002 [2024-11-19 14:25:52.412767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:24:54.002 [2024-11-19 14:25:52.412774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.002 [2024-11-19 14:25:52.480223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.002 [2024-11-19 14:25:52.480280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:54.002 [2024-11-19 14:25:52.480296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.430 ms 00:24:54.002 [2024-11-19 14:25:52.480305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.002 [2024-11-19 14:25:52.491835] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:54.002 [2024-11-19 14:25:52.495151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.002 [2024-11-19 14:25:52.495193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:54.002 [2024-11-19 14:25:52.495212] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.786 ms 00:24:54.002 [2024-11-19 14:25:52.495220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.002 [2024-11-19 14:25:52.495294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.002 [2024-11-19 14:25:52.495310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:54.002 [2024-11-19 14:25:52.495319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:54.002 [2024-11-19 14:25:52.495326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.002 [2024-11-19 14:25:52.496236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.002 [2024-11-19 14:25:52.496273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:54.002 [2024-11-19 14:25:52.496285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.872 ms 00:24:54.002 [2024-11-19 14:25:52.496301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.002 [2024-11-19 14:25:52.497659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.002 [2024-11-19 14:25:52.497704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:54.002 [2024-11-19 14:25:52.497716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.334 ms 00:24:54.003 [2024-11-19 14:25:52.497724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.003 [2024-11-19 14:25:52.497759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.003 [2024-11-19 14:25:52.497772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:54.003 [2024-11-19 14:25:52.497780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:54.003 [2024-11-19 14:25:52.497788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.003 [2024-11-19 14:25:52.497823] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:54.003 [2024-11-19 14:25:52.497837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.003 [2024-11-19 14:25:52.497845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:54.003 [2024-11-19 14:25:52.497853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:54.003 [2024-11-19 14:25:52.497861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.003 [2024-11-19 14:25:52.524158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.003 [2024-11-19 14:25:52.524220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:54.003 [2024-11-19 14:25:52.524233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.255 ms 00:24:54.003 [2024-11-19 14:25:52.524247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.003 [2024-11-19 14:25:52.524329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.003 [2024-11-19 14:25:52.524339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:54.003 [2024-11-19 14:25:52.524349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:54.003 [2024-11-19 14:25:52.524357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.003 [2024-11-19 14:25:52.525578] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 293.582 ms, result 0 00:24:55.390  [2024-11-19T14:25:54.896Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-19T14:25:55.839Z] Copying: 34/1024 [MB] (16 MBps) [2024-11-19T14:25:56.783Z] Copying: 56/1024 [MB] (21 MBps) [2024-11-19T14:25:57.728Z] Copying: 71/1024 [MB] (14 MBps) [2024-11-19T14:25:59.118Z] Copying: 86/1024 [MB] (15 MBps) [2024-11-19T14:25:59.704Z] Copying: 105/1024 [MB] (18 MBps) [2024-11-19T14:26:01.091Z] Copying: 124/1024 [MB] (18 MBps) [2024-11-19T14:26:02.038Z] Copying: 138/1024 [MB] (14 MBps) [2024-11-19T14:26:03.048Z] Copying: 149/1024 [MB] (10 MBps) [2024-11-19T14:26:03.993Z] Copying: 165/1024 [MB] (16 MBps) [2024-11-19T14:26:04.939Z] Copying: 179/1024 [MB] (14 MBps) [2024-11-19T14:26:05.903Z] Copying: 192/1024 [MB] (12 MBps) [2024-11-19T14:26:06.848Z] Copying: 203/1024 [MB] (11 MBps) [2024-11-19T14:26:07.795Z] Copying: 214/1024 [MB] (11 MBps) [2024-11-19T14:26:08.739Z] Copying: 227/1024 [MB] (12 MBps) [2024-11-19T14:26:10.127Z] Copying: 238/1024 [MB] (11 MBps) [2024-11-19T14:26:11.072Z] Copying: 249/1024 [MB] (11 MBps) [2024-11-19T14:26:12.017Z] Copying: 277/1024 [MB] (27 MBps) [2024-11-19T14:26:12.961Z] Copying: 296/1024 [MB] (19 MBps) [2024-11-19T14:26:13.909Z] Copying: 312/1024 [MB] (15 MBps) [2024-11-19T14:26:14.853Z] Copying: 331/1024 [MB] (18 MBps) [2024-11-19T14:26:15.799Z] Copying: 350/1024 [MB] (19 MBps) [2024-11-19T14:26:16.745Z] Copying: 363/1024 [MB] (12 MBps) [2024-11-19T14:26:18.133Z] Copying: 374/1024 [MB] (10 MBps) [2024-11-19T14:26:18.708Z] Copying: 388/1024 [MB] (14 MBps) [2024-11-19T14:26:20.095Z] Copying: 400/1024 [MB] (11 MBps) [2024-11-19T14:26:21.039Z] Copying: 426/1024 [MB] (26 MBps) [2024-11-19T14:26:21.984Z] Copying: 445/1024 [MB] (19 MBps) [2024-11-19T14:26:22.932Z] Copying: 468/1024 [MB] (22 MBps) [2024-11-19T14:26:23.880Z] Copying: 485/1024 [MB] (17 MBps) [2024-11-19T14:26:24.827Z] Copying: 513/1024 [MB] (28 MBps) [2024-11-19T14:26:25.774Z] Copying: 536/1024 [MB] (22 MBps) [2024-11-19T14:26:26.719Z] Copying: 560/1024 [MB] (23 MBps) [2024-11-19T14:26:28.108Z] Copying: 576/1024 [MB] (16 MBps) [2024-11-19T14:26:29.053Z] Copying: 594/1024 [MB] (17 MBps) [2024-11-19T14:26:29.997Z] Copying: 612/1024 [MB] (17 MBps) [2024-11-19T14:26:30.944Z] Copying: 630/1024 [MB] (18 MBps) [2024-11-19T14:26:31.890Z] Copying: 651/1024 [MB] (20 MBps) [2024-11-19T14:26:32.834Z] Copying: 672/1024 [MB] (21 MBps) [2024-11-19T14:26:33.824Z] Copying: 688/1024 [MB] (16 MBps) [2024-11-19T14:26:34.803Z] Copying: 708/1024 [MB] (20 MBps) [2024-11-19T14:26:35.747Z] Copying: 727/1024 [MB] (18 MBps) [2024-11-19T14:26:37.130Z] Copying: 746/1024 [MB] (18 MBps) [2024-11-19T14:26:38.074Z] Copying: 757/1024 [MB] (11 MBps) [2024-11-19T14:26:39.017Z] Copying: 783/1024 [MB] (26 MBps) [2024-11-19T14:26:39.962Z] Copying: 805/1024 [MB] (22 MBps) [2024-11-19T14:26:40.908Z] Copying: 824/1024 [MB] (19 MBps) [2024-11-19T14:26:41.853Z] Copying: 841/1024 [MB] (16 MBps) [2024-11-19T14:26:42.800Z] Copying: 862/1024 [MB] (20 MBps) [2024-11-19T14:26:43.746Z] Copying: 882/1024 [MB] (20 MBps) [2024-11-19T14:26:45.137Z] Copying: 895/1024 [MB] (12 MBps) [2024-11-19T14:26:45.710Z] Copying: 912/1024 [MB] (17 MBps) [2024-11-19T14:26:47.101Z] Copying: 930/1024 [MB] (18 MBps) [2024-11-19T14:26:48.047Z] Copying: 947/1024 [MB] (17 MBps) [2024-11-19T14:26:48.991Z] Copying: 967/1024 [MB] (20 MBps) [2024-11-19T14:26:49.935Z] Copying: 985/1024 [MB] (18 MBps) [2024-11-19T14:26:50.881Z] Copying: 1001/1024 [MB] (15 MBps) [2024-11-19T14:26:51.454Z] Copying: 
1017/1024 [MB] (15 MBps) [2024-11-19T14:26:51.716Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-19 14:26:51.637698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.154 [2024-11-19 14:26:51.638095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:53.154 [2024-11-19 14:26:51.638252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:53.154 [2024-11-19 14:26:51.638284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.154 [2024-11-19 14:26:51.638338] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:53.154 [2024-11-19 14:26:51.641352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.154 [2024-11-19 14:26:51.641528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:53.154 [2024-11-19 14:26:51.641596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.841 ms 00:25:53.154 [2024-11-19 14:26:51.641620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.154 [2024-11-19 14:26:51.641912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.154 [2024-11-19 14:26:51.641944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:53.154 [2024-11-19 14:26:51.641955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:25:53.154 [2024-11-19 14:26:51.641964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.154 [2024-11-19 14:26:51.646091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.154 [2024-11-19 14:26:51.646195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:53.154 [2024-11-19 14:26:51.646262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.109 ms 00:25:53.154 [2024-11-19 14:26:51.646286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.154 [2024-11-19 14:26:51.652480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.154 [2024-11-19 14:26:51.652622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:53.154 [2024-11-19 14:26:51.653531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.155 ms 00:25:53.154 [2024-11-19 14:26:51.653634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.154 [2024-11-19 14:26:51.682988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.154 [2024-11-19 14:26:51.683149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:53.154 [2024-11-19 14:26:51.683208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.239 ms 00:25:53.154 [2024-11-19 14:26:51.683231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.154 [2024-11-19 14:26:51.702350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.154 [2024-11-19 14:26:51.702510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:53.154 [2024-11-19 14:26:51.702568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.020 ms 00:25:53.154 [2024-11-19 14:26:51.702599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.154 [2024-11-19 14:26:51.712482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.154 [2024-11-19 14:26:51.712621] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:53.154 [2024-11-19 14:26:51.712676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.829 ms 00:25:53.154 [2024-11-19 14:26:51.712700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.417 [2024-11-19 14:26:51.740428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.417 [2024-11-19 14:26:51.740588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:53.417 [2024-11-19 14:26:51.740644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.697 ms 00:25:53.417 [2024-11-19 14:26:51.740666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.417 [2024-11-19 14:26:51.766245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.417 [2024-11-19 14:26:51.766400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:53.417 [2024-11-19 14:26:51.766472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.423 ms 00:25:53.417 [2024-11-19 14:26:51.766494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.417 [2024-11-19 14:26:51.791275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.417 [2024-11-19 14:26:51.791453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:53.417 [2024-11-19 14:26:51.791531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.447 ms 00:25:53.417 [2024-11-19 14:26:51.791557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.417 [2024-11-19 14:26:51.816471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.417 [2024-11-19 14:26:51.816629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:53.417 [2024-11-19 14:26:51.816686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.820 ms 00:25:53.417 [2024-11-19 14:26:51.816708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.417 [2024-11-19 14:26:51.817030] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:53.417 [2024-11-19 14:26:51.817114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:53.417 [2024-11-19 14:26:51.817248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:25:53.417 [2024-11-19 14:26:51.817286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817530] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 
[2024-11-19 14:26:51.817798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:53.417 [2024-11-19 14:26:51.817867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.817900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.817911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.817919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.817927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.817935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.817943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.817950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.817959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.817966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.817976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.817984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.817992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.817999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:25:53.418 [2024-11-19 14:26:51.818031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:53.418 [2024-11-19 14:26:51.818367] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:53.418 [2024-11-19 14:26:51.818375] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dda6fd23-b75b-45b2-8b0e-79979a296360 00:25:53.418 [2024-11-19 14:26:51.818383] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:25:53.418 [2024-11-19 14:26:51.818391] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:53.418 [2024-11-19 14:26:51.818398] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:53.418 [2024-11-19 14:26:51.818411] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:53.418 [2024-11-19 14:26:51.818419] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:53.418 [2024-11-19 14:26:51.818427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:53.418 [2024-11-19 14:26:51.818435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:53.418 [2024-11-19 14:26:51.818452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:53.418 [2024-11-19 14:26:51.818459] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:53.418 [2024-11-19 14:26:51.818468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.418 [2024-11-19 14:26:51.818478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:53.418 [2024-11-19 14:26:51.818491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.444 ms 00:25:53.418 [2024-11-19 14:26:51.818500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.418 [2024-11-19 14:26:51.832050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.418 [2024-11-19 14:26:51.832187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:53.418 [2024-11-19 14:26:51.832204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.488 ms 00:25:53.419 [2024-11-19 14:26:51.832212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.419 [2024-11-19 14:26:51.832435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.419 [2024-11-19 14:26:51.832446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:53.419 [2024-11-19 14:26:51.832455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:25:53.419 [2024-11-19 14:26:51.832463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.419 [2024-11-19 14:26:51.871423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.419 [2024-11-19 14:26:51.871604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:53.419 [2024-11-19 14:26:51.871624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.419 [2024-11-19 14:26:51.871632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.419 [2024-11-19 14:26:51.871702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.419 [2024-11-19 14:26:51.871711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:53.419 [2024-11-19 14:26:51.871719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.419 [2024-11-19 14:26:51.871727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.419 [2024-11-19 14:26:51.871800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.419 [2024-11-19 14:26:51.871813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:53.419 [2024-11-19 14:26:51.871821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.419 [2024-11-19 14:26:51.871830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.419 [2024-11-19 14:26:51.871851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.419 [2024-11-19 14:26:51.871859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:53.419 [2024-11-19 14:26:51.871869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.419 [2024-11-19 14:26:51.871906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.419 [2024-11-19 14:26:51.953407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.419 [2024-11-19 14:26:51.953462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:53.419 [2024-11-19 14:26:51.953475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:25:53.419 [2024-11-19 14:26:51.953483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.681 [2024-11-19 14:26:51.985854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.681 [2024-11-19 14:26:51.986059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:53.681 [2024-11-19 14:26:51.986077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.681 [2024-11-19 14:26:51.986086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.681 [2024-11-19 14:26:51.986154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.681 [2024-11-19 14:26:51.986164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:53.681 [2024-11-19 14:26:51.986172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.681 [2024-11-19 14:26:51.986180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.681 [2024-11-19 14:26:51.986222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.681 [2024-11-19 14:26:51.986232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:53.681 [2024-11-19 14:26:51.986249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.681 [2024-11-19 14:26:51.986257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.681 [2024-11-19 14:26:51.986361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.681 [2024-11-19 14:26:51.986372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:53.681 [2024-11-19 14:26:51.986383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.681 [2024-11-19 14:26:51.986391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.681 [2024-11-19 14:26:51.986421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.681 [2024-11-19 14:26:51.986430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:53.681 [2024-11-19 14:26:51.986444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.681 [2024-11-19 14:26:51.986453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.681 [2024-11-19 14:26:51.986495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.681 [2024-11-19 14:26:51.986504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:53.681 [2024-11-19 14:26:51.986513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.681 [2024-11-19 14:26:51.986522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.681 [2024-11-19 14:26:51.986572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.681 [2024-11-19 14:26:51.986584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:53.681 [2024-11-19 14:26:51.986596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.681 [2024-11-19 14:26:51.986605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.681 [2024-11-19 14:26:51.986735] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 349.008 ms, result 0 00:25:54.629 00:25:54.629 00:25:54.629 14:26:52 -- ftl/dirty_shutdown.sh@96 -- # 
md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:56.545 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:56.545 14:26:55 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:56.545 14:26:55 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:56.545 14:26:55 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:56.545 14:26:55 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:56.805 14:26:55 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:56.805 14:26:55 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:56.805 14:26:55 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:56.805 Process with pid 76024 is not found 00:25:56.805 14:26:55 -- ftl/dirty_shutdown.sh@37 -- # killprocess 76024 00:25:56.805 14:26:55 -- common/autotest_common.sh@936 -- # '[' -z 76024 ']' 00:25:56.805 14:26:55 -- common/autotest_common.sh@940 -- # kill -0 76024 00:25:56.805 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (76024) - No such process 00:25:56.805 14:26:55 -- common/autotest_common.sh@963 -- # echo 'Process with pid 76024 is not found' 00:25:56.805 14:26:55 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:57.065 Remove shared memory files 00:25:57.065 14:26:55 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:57.065 14:26:55 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:57.065 14:26:55 -- ftl/common.sh@205 -- # rm -f rm -f 00:25:57.065 14:26:55 -- ftl/common.sh@206 -- # rm -f rm -f 00:25:57.065 14:26:55 -- ftl/common.sh@207 -- # rm -f rm -f 00:25:57.065 14:26:55 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:57.065 14:26:55 -- ftl/common.sh@209 -- # rm -f rm -f 00:25:57.065 ************************************ 00:25:57.065 END TEST ftl_dirty_shutdown 00:25:57.065 ************************************ 00:25:57.065 00:25:57.065 real 3m57.483s 00:25:57.065 user 4m19.603s 00:25:57.065 sys 0m26.528s 00:25:57.065 14:26:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:57.065 14:26:55 -- common/autotest_common.sh@10 -- # set +x 00:25:57.327 14:26:55 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:25:57.327 14:26:55 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:25:57.327 14:26:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:57.327 14:26:55 -- common/autotest_common.sh@10 -- # set +x 00:25:57.327 ************************************ 00:25:57.327 START TEST ftl_upgrade_shutdown 00:25:57.327 ************************************ 00:25:57.327 14:26:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:25:57.327 * Looking for test storage... 
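The pass/fail signal for the dirty-shutdown test above is nothing more than an md5 comparison: a digest is recorded while the data is known-good, the FTL device goes through a dirty shutdown and restore, and the same file is re-read and verified (the "testfile2: OK" line). A minimal sketch of that pattern, with placeholder paths rather than the test's actual ones:

    md5sum testfile2 > testfile2.md5   # digest taken while the data is known-good
    # ... dirty shutdown and restore of the FTL bdev happen in between ...
    md5sum -c testfile2.md5            # prints "OK" only if the restored data matches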
00:25:57.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:57.327 14:26:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:57.327 14:26:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:57.327 14:26:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:57.327 14:26:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:57.327 14:26:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:57.327 14:26:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:57.327 14:26:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:57.327 14:26:55 -- scripts/common.sh@335 -- # IFS=.-: 00:25:57.327 14:26:55 -- scripts/common.sh@335 -- # read -ra ver1 00:25:57.327 14:26:55 -- scripts/common.sh@336 -- # IFS=.-: 00:25:57.327 14:26:55 -- scripts/common.sh@336 -- # read -ra ver2 00:25:57.327 14:26:55 -- scripts/common.sh@337 -- # local 'op=<' 00:25:57.327 14:26:55 -- scripts/common.sh@339 -- # ver1_l=2 00:25:57.327 14:26:55 -- scripts/common.sh@340 -- # ver2_l=1 00:25:57.327 14:26:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:57.327 14:26:55 -- scripts/common.sh@343 -- # case "$op" in 00:25:57.327 14:26:55 -- scripts/common.sh@344 -- # : 1 00:25:57.327 14:26:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:57.327 14:26:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:57.327 14:26:55 -- scripts/common.sh@364 -- # decimal 1 00:25:57.327 14:26:55 -- scripts/common.sh@352 -- # local d=1 00:25:57.327 14:26:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:57.327 14:26:55 -- scripts/common.sh@354 -- # echo 1 00:25:57.327 14:26:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:57.327 14:26:55 -- scripts/common.sh@365 -- # decimal 2 00:25:57.327 14:26:55 -- scripts/common.sh@352 -- # local d=2 00:25:57.327 14:26:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:57.327 14:26:55 -- scripts/common.sh@354 -- # echo 2 00:25:57.327 14:26:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:57.327 14:26:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:57.327 14:26:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:57.327 14:26:55 -- scripts/common.sh@367 -- # return 0 00:25:57.327 14:26:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:57.327 14:26:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:57.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.327 --rc genhtml_branch_coverage=1 00:25:57.327 --rc genhtml_function_coverage=1 00:25:57.327 --rc genhtml_legend=1 00:25:57.327 --rc geninfo_all_blocks=1 00:25:57.327 --rc geninfo_unexecuted_blocks=1 00:25:57.327 00:25:57.327 ' 00:25:57.327 14:26:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:57.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.327 --rc genhtml_branch_coverage=1 00:25:57.327 --rc genhtml_function_coverage=1 00:25:57.327 --rc genhtml_legend=1 00:25:57.327 --rc geninfo_all_blocks=1 00:25:57.327 --rc geninfo_unexecuted_blocks=1 00:25:57.327 00:25:57.327 ' 00:25:57.327 14:26:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:57.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.327 --rc genhtml_branch_coverage=1 00:25:57.327 --rc genhtml_function_coverage=1 00:25:57.327 --rc genhtml_legend=1 00:25:57.327 --rc geninfo_all_blocks=1 00:25:57.327 --rc geninfo_unexecuted_blocks=1 00:25:57.327 00:25:57.327 ' 00:25:57.327 14:26:55 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:57.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.327 --rc genhtml_branch_coverage=1 00:25:57.327 --rc genhtml_function_coverage=1 00:25:57.327 --rc genhtml_legend=1 00:25:57.327 --rc geninfo_all_blocks=1 00:25:57.327 --rc geninfo_unexecuted_blocks=1 00:25:57.327 00:25:57.327 ' 00:25:57.327 14:26:55 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:57.327 14:26:55 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:57.327 14:26:55 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:57.328 14:26:55 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:57.328 14:26:55 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:57.328 14:26:55 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:57.328 14:26:55 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:57.328 14:26:55 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:57.328 14:26:55 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:57.328 14:26:55 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:57.328 14:26:55 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:57.328 14:26:55 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:57.328 14:26:55 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:57.328 14:26:55 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:57.328 14:26:55 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:57.328 14:26:55 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:57.328 14:26:55 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:57.328 14:26:55 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:57.328 14:26:55 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:57.328 14:26:55 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:57.328 14:26:55 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:57.328 14:26:55 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:57.328 14:26:55 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:57.328 14:26:55 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:57.328 14:26:55 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:57.328 14:26:55 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:57.328 14:26:55 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:57.328 14:26:55 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:57.328 14:26:55 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:57.328 14:26:55 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:57.328 14:26:55 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:57.328 14:26:55 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:57.328 14:26:55 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:25:57.328 14:26:55 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:25:57.328 14:26:55 -- ftl/upgrade_shutdown.sh@21 -- # export 
FTL_BASE_SIZE=20480 00:25:57.328 14:26:55 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:57.328 14:26:55 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:25:57.328 14:26:55 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:25:57.328 14:26:55 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:57.328 14:26:55 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:57.328 14:26:55 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:57.328 14:26:55 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:57.328 14:26:55 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:57.328 14:26:55 -- ftl/common.sh@81 -- # local base_bdev= 00:25:57.328 14:26:55 -- ftl/common.sh@82 -- # local cache_bdev= 00:25:57.328 14:26:55 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:57.328 14:26:55 -- ftl/common.sh@89 -- # spdk_tgt_pid=78614 00:25:57.328 14:26:55 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:57.328 14:26:55 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:57.328 14:26:55 -- ftl/common.sh@91 -- # waitforlisten 78614 00:25:57.328 14:26:55 -- common/autotest_common.sh@829 -- # '[' -z 78614 ']' 00:25:57.328 14:26:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.328 14:26:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:57.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.328 14:26:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.328 14:26:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:57.328 14:26:55 -- common/autotest_common.sh@10 -- # set +x 00:25:57.589 [2024-11-19 14:26:55.903702] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
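The target bring-up logged here follows the usual autotest pattern: spdk_tgt is launched pinned to core 0, and the script blocks on waitforlisten until the RPC socket answers before any bdev RPCs are issued. A rough sketch of that sequence, using rpc_get_methods as a simplified stand-in for the real waitforlisten helper (which also tracks the pid and enforces a timeout):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
    spdk_tgt_pid=$!
    # poll the default RPC socket (/var/tmp/spdk.sock) until the target responds
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done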
00:25:57.589 [2024-11-19 14:26:55.904033] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78614 ] 00:25:57.589 [2024-11-19 14:26:56.061275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.850 [2024-11-19 14:26:56.284080] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:57.850 [2024-11-19 14:26:56.284449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.236 14:26:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:59.236 14:26:57 -- common/autotest_common.sh@862 -- # return 0 00:25:59.236 14:26:57 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:59.236 14:26:57 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:59.236 14:26:57 -- ftl/common.sh@99 -- # local params 00:25:59.236 14:26:57 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:59.236 14:26:57 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:59.236 14:26:57 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:59.236 14:26:57 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:25:59.236 14:26:57 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:59.236 14:26:57 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:59.236 14:26:57 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:59.236 14:26:57 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:25:59.236 14:26:57 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:59.236 14:26:57 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:59.236 14:26:57 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:59.236 14:26:57 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:59.236 14:26:57 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:25:59.236 14:26:57 -- ftl/common.sh@54 -- # local name=base 00:25:59.236 14:26:57 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:25:59.236 14:26:57 -- ftl/common.sh@56 -- # local size=20480 00:25:59.236 14:26:57 -- ftl/common.sh@59 -- # local base_bdev 00:25:59.236 14:26:57 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0 00:25:59.236 14:26:57 -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:59.236 14:26:57 -- ftl/common.sh@62 -- # local base_size 00:25:59.236 14:26:57 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:59.236 14:26:57 -- common/autotest_common.sh@1367 -- # local bdev_name=basen1 00:25:59.236 14:26:57 -- common/autotest_common.sh@1368 -- # local bdev_info 00:25:59.236 14:26:57 -- common/autotest_common.sh@1369 -- # local bs 00:25:59.236 14:26:57 -- common/autotest_common.sh@1370 -- # local nb 00:25:59.236 14:26:57 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:59.498 14:26:57 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:25:59.498 { 00:25:59.498 "name": "basen1", 00:25:59.498 "aliases": [ 00:25:59.498 "4e0bbe7d-2f03-450b-a632-b7402b40c8d8" 00:25:59.498 ], 00:25:59.498 "product_name": "NVMe disk", 00:25:59.498 "block_size": 4096, 00:25:59.498 "num_blocks": 1310720, 00:25:59.498 "uuid": "4e0bbe7d-2f03-450b-a632-b7402b40c8d8", 00:25:59.498 "assigned_rate_limits": { 00:25:59.498 "rw_ios_per_sec": 0, 00:25:59.498 
"rw_mbytes_per_sec": 0, 00:25:59.498 "r_mbytes_per_sec": 0, 00:25:59.498 "w_mbytes_per_sec": 0 00:25:59.498 }, 00:25:59.498 "claimed": true, 00:25:59.498 "claim_type": "read_many_write_one", 00:25:59.498 "zoned": false, 00:25:59.498 "supported_io_types": { 00:25:59.498 "read": true, 00:25:59.498 "write": true, 00:25:59.498 "unmap": true, 00:25:59.498 "write_zeroes": true, 00:25:59.498 "flush": true, 00:25:59.498 "reset": true, 00:25:59.498 "compare": true, 00:25:59.498 "compare_and_write": false, 00:25:59.498 "abort": true, 00:25:59.498 "nvme_admin": true, 00:25:59.498 "nvme_io": true 00:25:59.498 }, 00:25:59.498 "driver_specific": { 00:25:59.498 "nvme": [ 00:25:59.498 { 00:25:59.498 "pci_address": "0000:00:07.0", 00:25:59.498 "trid": { 00:25:59.498 "trtype": "PCIe", 00:25:59.498 "traddr": "0000:00:07.0" 00:25:59.498 }, 00:25:59.498 "ctrlr_data": { 00:25:59.498 "cntlid": 0, 00:25:59.498 "vendor_id": "0x1b36", 00:25:59.498 "model_number": "QEMU NVMe Ctrl", 00:25:59.498 "serial_number": "12341", 00:25:59.498 "firmware_revision": "8.0.0", 00:25:59.498 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:59.498 "oacs": { 00:25:59.498 "security": 0, 00:25:59.498 "format": 1, 00:25:59.498 "firmware": 0, 00:25:59.498 "ns_manage": 1 00:25:59.498 }, 00:25:59.498 "multi_ctrlr": false, 00:25:59.498 "ana_reporting": false 00:25:59.498 }, 00:25:59.498 "vs": { 00:25:59.498 "nvme_version": "1.4" 00:25:59.498 }, 00:25:59.498 "ns_data": { 00:25:59.498 "id": 1, 00:25:59.498 "can_share": false 00:25:59.498 } 00:25:59.498 } 00:25:59.498 ], 00:25:59.498 "mp_policy": "active_passive" 00:25:59.498 } 00:25:59.498 } 00:25:59.498 ]' 00:25:59.498 14:26:57 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:25:59.498 14:26:57 -- common/autotest_common.sh@1372 -- # bs=4096 00:25:59.498 14:26:57 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:25:59.498 14:26:57 -- common/autotest_common.sh@1373 -- # nb=1310720 00:25:59.498 14:26:57 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:25:59.498 14:26:57 -- common/autotest_common.sh@1377 -- # echo 5120 00:25:59.498 14:26:57 -- ftl/common.sh@63 -- # base_size=5120 00:25:59.498 14:26:57 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:59.498 14:26:57 -- ftl/common.sh@67 -- # clear_lvols 00:25:59.498 14:26:57 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:59.498 14:26:57 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:59.758 14:26:58 -- ftl/common.sh@28 -- # stores=faae1519-34d0-43f5-b460-d8730dfb7b41 00:25:59.758 14:26:58 -- ftl/common.sh@29 -- # for lvs in $stores 00:25:59.758 14:26:58 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u faae1519-34d0-43f5-b460-d8730dfb7b41 00:26:00.019 14:26:58 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:00.019 14:26:58 -- ftl/common.sh@68 -- # lvs=37627e24-7f62-4eb5-9d70-b296158c7d7c 00:26:00.019 14:26:58 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 37627e24-7f62-4eb5-9d70-b296158c7d7c 00:26:00.280 14:26:58 -- ftl/common.sh@107 -- # base_bdev=9a14e7da-418f-441d-93bf-86aec0cd1242 00:26:00.280 14:26:58 -- ftl/common.sh@108 -- # [[ -z 9a14e7da-418f-441d-93bf-86aec0cd1242 ]] 00:26:00.280 14:26:58 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 9a14e7da-418f-441d-93bf-86aec0cd1242 5120 00:26:00.280 14:26:58 -- ftl/common.sh@35 -- # local name=cache 00:26:00.280 14:26:58 -- 
ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:26:00.280 14:26:58 -- ftl/common.sh@37 -- # local base_bdev=9a14e7da-418f-441d-93bf-86aec0cd1242 00:26:00.280 14:26:58 -- ftl/common.sh@38 -- # local cache_size=5120 00:26:00.280 14:26:58 -- ftl/common.sh@41 -- # get_bdev_size 9a14e7da-418f-441d-93bf-86aec0cd1242 00:26:00.280 14:26:58 -- common/autotest_common.sh@1367 -- # local bdev_name=9a14e7da-418f-441d-93bf-86aec0cd1242 00:26:00.280 14:26:58 -- common/autotest_common.sh@1368 -- # local bdev_info 00:26:00.280 14:26:58 -- common/autotest_common.sh@1369 -- # local bs 00:26:00.280 14:26:58 -- common/autotest_common.sh@1370 -- # local nb 00:26:00.280 14:26:58 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9a14e7da-418f-441d-93bf-86aec0cd1242 00:26:00.541 14:26:58 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:26:00.541 { 00:26:00.541 "name": "9a14e7da-418f-441d-93bf-86aec0cd1242", 00:26:00.541 "aliases": [ 00:26:00.541 "lvs/basen1p0" 00:26:00.541 ], 00:26:00.541 "product_name": "Logical Volume", 00:26:00.541 "block_size": 4096, 00:26:00.541 "num_blocks": 5242880, 00:26:00.542 "uuid": "9a14e7da-418f-441d-93bf-86aec0cd1242", 00:26:00.542 "assigned_rate_limits": { 00:26:00.542 "rw_ios_per_sec": 0, 00:26:00.542 "rw_mbytes_per_sec": 0, 00:26:00.542 "r_mbytes_per_sec": 0, 00:26:00.542 "w_mbytes_per_sec": 0 00:26:00.542 }, 00:26:00.542 "claimed": false, 00:26:00.542 "zoned": false, 00:26:00.542 "supported_io_types": { 00:26:00.542 "read": true, 00:26:00.542 "write": true, 00:26:00.542 "unmap": true, 00:26:00.542 "write_zeroes": true, 00:26:00.542 "flush": false, 00:26:00.542 "reset": true, 00:26:00.542 "compare": false, 00:26:00.542 "compare_and_write": false, 00:26:00.542 "abort": false, 00:26:00.542 "nvme_admin": false, 00:26:00.542 "nvme_io": false 00:26:00.542 }, 00:26:00.542 "driver_specific": { 00:26:00.542 "lvol": { 00:26:00.542 "lvol_store_uuid": "37627e24-7f62-4eb5-9d70-b296158c7d7c", 00:26:00.542 "base_bdev": "basen1", 00:26:00.542 "thin_provision": true, 00:26:00.542 "snapshot": false, 00:26:00.542 "clone": false, 00:26:00.542 "esnap_clone": false 00:26:00.542 } 00:26:00.542 } 00:26:00.542 } 00:26:00.542 ]' 00:26:00.542 14:26:58 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:26:00.542 14:26:58 -- common/autotest_common.sh@1372 -- # bs=4096 00:26:00.542 14:26:58 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:26:00.542 14:26:58 -- common/autotest_common.sh@1373 -- # nb=5242880 00:26:00.542 14:26:58 -- common/autotest_common.sh@1376 -- # bdev_size=20480 00:26:00.542 14:26:58 -- common/autotest_common.sh@1377 -- # echo 20480 00:26:00.542 14:26:58 -- ftl/common.sh@41 -- # local base_size=1024 00:26:00.542 14:26:58 -- ftl/common.sh@44 -- # local nvc_bdev 00:26:00.542 14:26:58 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:26:00.809 14:26:59 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:00.809 14:26:59 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:00.809 14:26:59 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:01.073 14:26:59 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:01.073 14:26:59 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:01.073 14:26:59 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 9a14e7da-418f-441d-93bf-86aec0cd1242 -c cachen1p0 --l2p_dram_limit 2 00:26:01.073 
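Condensed, the bdev stack assembled above is: base NVMe -> lvstore -> thin lvol for the 20 GiB FTL base device, plus a second NVMe split down to a 5 GiB write-buffer cache, with bdev_ftl_create tying the two together. The RPC names below are the ones the test invokes; the UUID placeholders stand in for the values printed above:

    rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0    # exposes basen1
    rpc.py bdev_lvol_create_lvstore basen1 lvs
    rpc.py bdev_lvol_create basen1p0 20480 -t -u <lvs-uuid>               # thin-provisioned
    rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0   # exposes cachen1
    rpc.py bdev_split_create cachen1 -s 5120 1                            # yields cachen1p0
    rpc.py -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2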
[2024-11-19 14:26:59.566355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.073 [2024-11-19 14:26:59.566469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:01.073 [2024-11-19 14:26:59.566487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:01.073 [2024-11-19 14:26:59.566495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.073 [2024-11-19 14:26:59.566533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.073 [2024-11-19 14:26:59.566541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:01.073 [2024-11-19 14:26:59.566549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:01.073 [2024-11-19 14:26:59.566554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.073 [2024-11-19 14:26:59.566571] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:01.073 [2024-11-19 14:26:59.567126] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:01.073 [2024-11-19 14:26:59.567141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.073 [2024-11-19 14:26:59.567147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:01.073 [2024-11-19 14:26:59.567156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.573 ms 00:26:01.073 [2024-11-19 14:26:59.567162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.073 [2024-11-19 14:26:59.567185] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 37802868-17ca-49ed-ba1a-3910b4de473b 00:26:01.073 [2024-11-19 14:26:59.568108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.073 [2024-11-19 14:26:59.568130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:01.073 [2024-11-19 14:26:59.568137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:01.073 [2024-11-19 14:26:59.568145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.073 [2024-11-19 14:26:59.572711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.073 [2024-11-19 14:26:59.572737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:01.073 [2024-11-19 14:26:59.572745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.532 ms 00:26:01.073 [2024-11-19 14:26:59.572751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.073 [2024-11-19 14:26:59.572809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.073 [2024-11-19 14:26:59.572817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:01.073 [2024-11-19 14:26:59.572824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:01.073 [2024-11-19 14:26:59.572833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.073 [2024-11-19 14:26:59.572863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.073 [2024-11-19 14:26:59.572873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:01.073 [2024-11-19 14:26:59.572890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:01.073 [2024-11-19 14:26:59.572897] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:01.073 [2024-11-19 14:26:59.572914] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:01.073 [2024-11-19 14:26:59.575924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.073 [2024-11-19 14:26:59.576001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:01.073 [2024-11-19 14:26:59.576045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.013 ms 00:26:01.073 [2024-11-19 14:26:59.576063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.073 [2024-11-19 14:26:59.576108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.073 [2024-11-19 14:26:59.576124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:01.073 [2024-11-19 14:26:59.576140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:01.073 [2024-11-19 14:26:59.576178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.073 [2024-11-19 14:26:59.576211] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:01.073 [2024-11-19 14:26:59.576307] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:01.073 [2024-11-19 14:26:59.576336] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:01.073 [2024-11-19 14:26:59.576393] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:01.073 [2024-11-19 14:26:59.576427] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:01.073 [2024-11-19 14:26:59.576451] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:01.073 [2024-11-19 14:26:59.576478] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:01.073 [2024-11-19 14:26:59.576513] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:01.073 [2024-11-19 14:26:59.576532] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:01.073 [2024-11-19 14:26:59.576547] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:01.073 [2024-11-19 14:26:59.576565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.073 [2024-11-19 14:26:59.576673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:01.073 [2024-11-19 14:26:59.576693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.355 ms 00:26:01.073 [2024-11-19 14:26:59.576709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.073 [2024-11-19 14:26:59.576768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.073 [2024-11-19 14:26:59.576786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:01.073 [2024-11-19 14:26:59.576802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:01.073 [2024-11-19 14:26:59.576898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.073 [2024-11-19 14:26:59.576978] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:01.073 [2024-11-19 14:26:59.576998] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:01.073 [2024-11-19 
14:26:59.577015] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:01.073 [2024-11-19 14:26:59.577114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.073 [2024-11-19 14:26:59.577134] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:01.073 [2024-11-19 14:26:59.577149] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:01.073 [2024-11-19 14:26:59.577165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:01.073 [2024-11-19 14:26:59.577178] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:01.073 [2024-11-19 14:26:59.577193] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:01.073 [2024-11-19 14:26:59.577208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.073 [2024-11-19 14:26:59.577260] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:01.073 [2024-11-19 14:26:59.577277] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:01.073 [2024-11-19 14:26:59.577294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.073 [2024-11-19 14:26:59.577309] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:01.073 [2024-11-19 14:26:59.577324] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:01.073 [2024-11-19 14:26:59.577338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.073 [2024-11-19 14:26:59.577354] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:01.073 [2024-11-19 14:26:59.577395] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:01.073 [2024-11-19 14:26:59.577413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.073 [2024-11-19 14:26:59.577427] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:01.073 [2024-11-19 14:26:59.577442] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:01.073 [2024-11-19 14:26:59.577457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:01.073 [2024-11-19 14:26:59.577473] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:01.073 [2024-11-19 14:26:59.577487] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:01.073 [2024-11-19 14:26:59.577521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:01.073 [2024-11-19 14:26:59.577538] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:01.073 [2024-11-19 14:26:59.577553] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:01.074 [2024-11-19 14:26:59.577567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:01.074 [2024-11-19 14:26:59.577606] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:01.074 [2024-11-19 14:26:59.577623] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:01.074 [2024-11-19 14:26:59.577638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:01.074 [2024-11-19 14:26:59.577652] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:01.074 [2024-11-19 14:26:59.577668] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:01.074 [2024-11-19 14:26:59.577699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:01.074 [2024-11-19 
14:26:59.577717] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:01.074 [2024-11-19 14:26:59.577731] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:01.074 [2024-11-19 14:26:59.577747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.074 [2024-11-19 14:26:59.577783] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:01.074 [2024-11-19 14:26:59.577801] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:01.074 [2024-11-19 14:26:59.577816] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.074 [2024-11-19 14:26:59.577831] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:01.074 [2024-11-19 14:26:59.577846] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:01.074 [2024-11-19 14:26:59.577890] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:01.074 [2024-11-19 14:26:59.577909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.074 [2024-11-19 14:26:59.577928] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:01.074 [2024-11-19 14:26:59.577967] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:01.074 [2024-11-19 14:26:59.577976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:01.074 [2024-11-19 14:26:59.577982] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:01.074 [2024-11-19 14:26:59.577991] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:01.074 [2024-11-19 14:26:59.577996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:01.074 [2024-11-19 14:26:59.578005] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:01.074 [2024-11-19 14:26:59.578013] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:01.074 [2024-11-19 14:26:59.578021] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:01.074 [2024-11-19 14:26:59.578027] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:01.074 [2024-11-19 14:26:59.578035] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:01.074 [2024-11-19 14:26:59.578040] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:01.074 [2024-11-19 14:26:59.578048] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:01.074 [2024-11-19 14:26:59.578054] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:01.074 [2024-11-19 14:26:59.578060] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:01.074 [2024-11-19 14:26:59.578065] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:01.074 [2024-11-19 14:26:59.578072] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:01.074 [2024-11-19 14:26:59.578077] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:01.074 [2024-11-19 14:26:59.578085] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:01.074 [2024-11-19 14:26:59.578090] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:01.074 [2024-11-19 14:26:59.578099] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:01.074 [2024-11-19 14:26:59.578104] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:01.074 [2024-11-19 14:26:59.578112] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:01.074 [2024-11-19 14:26:59.578119] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:01.074 [2024-11-19 14:26:59.578125] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:01.074 [2024-11-19 14:26:59.578131] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:01.074 [2024-11-19 14:26:59.578137] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:01.074 [2024-11-19 14:26:59.578143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.074 [2024-11-19 14:26:59.578150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:01.074 [2024-11-19 14:26:59.578156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.203 ms 00:26:01.074 [2024-11-19 14:26:59.578163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.074 [2024-11-19 14:26:59.589858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.074 [2024-11-19 14:26:59.589959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:01.074 [2024-11-19 14:26:59.589999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.663 ms 00:26:01.074 [2024-11-19 14:26:59.590019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.074 [2024-11-19 14:26:59.590183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.074 [2024-11-19 14:26:59.590249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:01.074 [2024-11-19 14:26:59.590302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:01.074 [2024-11-19 14:26:59.590322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.074 [2024-11-19 14:26:59.614037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.074 [2024-11-19 14:26:59.614129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:01.074 [2024-11-19 14:26:59.614174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.673 ms 00:26:01.074 [2024-11-19 
14:26:59.614194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.074 [2024-11-19 14:26:59.614227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.074 [2024-11-19 14:26:59.614310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:01.074 [2024-11-19 14:26:59.614329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:01.074 [2024-11-19 14:26:59.614344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.074 [2024-11-19 14:26:59.614667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.074 [2024-11-19 14:26:59.614746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:01.074 [2024-11-19 14:26:59.614785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.261 ms 00:26:01.074 [2024-11-19 14:26:59.614852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.074 [2024-11-19 14:26:59.614945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.074 [2024-11-19 14:26:59.614969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:01.074 [2024-11-19 14:26:59.615007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:26:01.074 [2024-11-19 14:26:59.615026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.074 [2024-11-19 14:26:59.626922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.074 [2024-11-19 14:26:59.627006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:01.074 [2024-11-19 14:26:59.627042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.867 ms 00:26:01.074 [2024-11-19 14:26:59.627061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.335 [2024-11-19 14:26:59.635988] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:01.335 [2024-11-19 14:26:59.636697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.335 [2024-11-19 14:26:59.636720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:01.335 [2024-11-19 14:26:59.636729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.570 ms 00:26:01.335 [2024-11-19 14:26:59.636734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.335 [2024-11-19 14:26:59.659848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.335 [2024-11-19 14:26:59.659885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:01.335 [2024-11-19 14:26:59.659897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.094 ms 00:26:01.335 [2024-11-19 14:26:59.659903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.335 [2024-11-19 14:26:59.659935] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 
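The two layout dumps above describe the same map in different units: dump_region reports offsets and sizes in MiB, while the superblock dump reports raw block offsets (blk_offs) and sizes (blk_sz). With the 4 KiB FTL block size these figures imply, the hex fields convert directly; a quick sanity check (the to_mib helper below is hypothetical, not part of the test suite):

  to_mib() { echo "scale=2; $(($1)) * 4 / 1024" | bc; }   # 4 KiB blocks -> MiB
  to_mib 0x20       # .12     -> the l2p offset, shown above as 0.12 MiB
  to_mib 0xe80      # 14.50   -> the l2p size, shown above as 14.50 MiB
  to_mib 0x100000   # 4096.00 -> the data_nvc size, shown above as 4096.00 MiB

so the superblock entry "type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80" is exactly the l2p region at offset 0.12 MiB, 14.50 MiB long.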
00:26:01.335 [2024-11-19 14:26:59.659964] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:26:05.544 [2024-11-19 14:27:03.740246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.544 [2024-11-19 14:27:03.740325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:05.544 [2024-11-19 14:27:03.740346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4080.284 ms 00:26:05.544 [2024-11-19 14:27:03.740356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.544 [2024-11-19 14:27:03.740475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.544 [2024-11-19 14:27:03.740488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:05.544 [2024-11-19 14:27:03.740504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:26:05.544 [2024-11-19 14:27:03.740513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.544 [2024-11-19 14:27:03.765966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.544 [2024-11-19 14:27:03.766019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:05.544 [2024-11-19 14:27:03.766036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.398 ms 00:26:05.544 [2024-11-19 14:27:03.766045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.544 [2024-11-19 14:27:03.791316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.544 [2024-11-19 14:27:03.791362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:05.544 [2024-11-19 14:27:03.791380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.219 ms 00:26:05.544 [2024-11-19 14:27:03.791387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.544 [2024-11-19 14:27:03.791763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.544 [2024-11-19 14:27:03.791774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:05.544 [2024-11-19 14:27:03.791786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.330 ms 00:26:05.544 [2024-11-19 14:27:03.791793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.544 [2024-11-19 14:27:03.865690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.544 [2024-11-19 14:27:03.865899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:05.544 [2024-11-19 14:27:03.865928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 73.851 ms 00:26:05.544 [2024-11-19 14:27:03.865937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.544 [2024-11-19 14:27:03.893371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.544 [2024-11-19 14:27:03.893552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:05.544 [2024-11-19 14:27:03.893577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 27.384 ms 00:26:05.544 [2024-11-19 14:27:03.893586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.544 [2024-11-19 14:27:03.896386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.544 [2024-11-19 14:27:03.896467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:05.544 [2024-11-19 14:27:03.896495] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.377 ms 00:26:05.544 [2024-11-19 14:27:03.896511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.544 [2024-11-19 14:27:03.926938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.544 [2024-11-19 14:27:03.927126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:05.544 [2024-11-19 14:27:03.927217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.319 ms 00:26:05.544 [2024-11-19 14:27:03.927242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.544 [2024-11-19 14:27:03.927303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.544 [2024-11-19 14:27:03.927327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:05.544 [2024-11-19 14:27:03.927350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:05.544 [2024-11-19 14:27:03.927369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.544 [2024-11-19 14:27:03.927541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.544 [2024-11-19 14:27:03.927556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:05.544 [2024-11-19 14:27:03.927568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:26:05.544 [2024-11-19 14:27:03.927576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.544 [2024-11-19 14:27:03.928802] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4361.944 ms, result 0 00:26:05.544 { 00:26:05.544 "name": "ftl", 00:26:05.544 "uuid": "37802868-17ca-49ed-ba1a-3910b4de473b" 00:26:05.544 } 00:26:05.545 14:27:03 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:05.805 [2024-11-19 14:27:04.143790] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:05.805 14:27:04 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:06.067 14:27:04 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:06.067 [2024-11-19 14:27:04.556291] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:06.067 14:27:04 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:06.328 [2024-11-19 14:27:04.749807] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:06.328 14:27:04 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:06.589 Fill FTL, iteration 1 00:26:06.589 14:27:05 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:06.589 14:27:05 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:06.589 14:27:05 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:06.589 14:27:05 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:06.589 14:27:05 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:06.589 14:27:05 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:06.589 14:27:05 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:06.590 14:27:05 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:06.590 14:27:05 -- ftl/upgrade_shutdown.sh@38 -- # (( 
i = 0 )) 00:26:06.590 14:27:05 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:06.590 14:27:05 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:06.590 14:27:05 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:06.590 14:27:05 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:06.590 14:27:05 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:06.590 14:27:05 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:06.590 14:27:05 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:06.590 14:27:05 -- ftl/common.sh@163 -- # spdk_ini_pid=78746 00:26:06.590 14:27:05 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:06.590 14:27:05 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:06.590 14:27:05 -- ftl/common.sh@165 -- # waitforlisten 78746 /var/tmp/spdk.tgt.sock 00:26:06.590 14:27:05 -- common/autotest_common.sh@829 -- # '[' -z 78746 ']' 00:26:06.590 14:27:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:06.590 14:27:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:06.590 14:27:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:06.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:06.590 14:27:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:06.590 14:27:05 -- common/autotest_common.sh@10 -- # set +x 00:26:06.590 [2024-11-19 14:27:05.131657] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:06.590 [2024-11-19 14:27:05.131983] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78746 ] 00:26:06.851 [2024-11-19 14:27:05.276054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.112 [2024-11-19 14:27:05.445654] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:07.112 [2024-11-19 14:27:05.445994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.096 14:27:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:08.096 14:27:06 -- common/autotest_common.sh@862 -- # return 0 00:26:08.096 14:27:06 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:08.357 ftln1 00:26:08.357 14:27:06 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:08.357 14:27:06 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:08.619 14:27:07 -- ftl/common.sh@173 -- # echo ']}' 00:26:08.619 14:27:07 -- ftl/common.sh@176 -- # killprocess 78746 00:26:08.619 14:27:07 -- common/autotest_common.sh@936 -- # '[' -z 78746 ']' 00:26:08.619 14:27:07 -- common/autotest_common.sh@940 -- # kill -0 78746 00:26:08.619 14:27:07 -- common/autotest_common.sh@941 -- # uname 00:26:08.619 14:27:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:08.619 14:27:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78746 00:26:08.619 killing process with pid 78746 00:26:08.619 14:27:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:08.619 14:27:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:08.619 14:27:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78746' 00:26:08.619 14:27:07 -- common/autotest_common.sh@955 -- # kill 78746 00:26:08.619 14:27:07 -- common/autotest_common.sh@960 -- # wait 78746 00:26:10.534 14:27:08 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:10.534 14:27:08 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:10.534 [2024-11-19 14:27:08.729552] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
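The tcp_dd helper that just ran is the test's I/O primitive: a short-lived initiator application is brought up on /var/tmp/spdk.tgt.sock, the FTL namespace exported over NVMe/TCP is attached as bdev ftln1, the resulting bdev configuration is saved to ini.json, the helper app is killed, and every subsequent transfer is a plain spdk_dd run against that JSON. Condensed from the xtrace lines above (a sketch; the redirection into ini.json is implied by the surrounding script rather than shown in the log):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
  $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2018-09.io.spdk:cnode0        # exposes the namespace as "ftln1"
  {
    echo '{"subsystems": ['
    $rpc save_subsystem_config -n bdev             # capture the attached-bdev config
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  # ...after which each transfer needs no live RPC connection at all:
  spdk_dd --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0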
00:26:10.534 [2024-11-19 14:27:08.729659] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78805 ] 00:26:10.534 [2024-11-19 14:27:08.876909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.534 [2024-11-19 14:27:09.074002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.925  [2024-11-19T14:27:11.863Z] Copying: 192/1024 [MB] (192 MBps) [2024-11-19T14:27:12.798Z] Copying: 432/1024 [MB] (240 MBps) [2024-11-19T14:27:13.733Z] Copying: 668/1024 [MB] (236 MBps) [2024-11-19T14:27:13.991Z] Copying: 920/1024 [MB] (252 MBps) [2024-11-19T14:27:14.926Z] Copying: 1024/1024 [MB] (average 230 MBps) 00:26:16.364 00:26:16.364 Calculate MD5 checksum, iteration 1 00:26:16.364 14:27:14 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:16.364 14:27:14 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:16.364 14:27:14 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:16.364 14:27:14 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:16.364 14:27:14 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:16.364 14:27:14 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:16.364 14:27:14 -- ftl/common.sh@154 -- # return 0 00:26:16.364 14:27:14 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:16.364 [2024-11-19 14:27:14.636210] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
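Stripped of trace output, the fill-and-checksum phase driving this stretch of the log reduces to roughly the loop below; the variable names and parameters (bs=1048576, count=1024, qd=2, iterations=2) are taken from the xtrace lines, while the loop body itself is a reconstruction:

  file=/home/vagrant/spdk_repo/spdk/test/ftl/file
  bs=1048576; count=1024; qd=2; iterations=2
  seek=0; skip=0; sums=()
  for (( i = 0; i < iterations; i++ )); do
      echo "Fill FTL, iteration $(( i + 1 ))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$(( seek + count ))             # 0 -> 1024 -> 2048, counted in 1 MiB blocks
      echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
      tcp_dd --ib=ftln1 --of=$file --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$(( skip + count ))
      sums[i]=$(md5sum $file | cut -f1 -d' ')  # digest of the 1 GiB window just written
  done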
00:26:16.364 [2024-11-19 14:27:14.636316] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78864 ] 00:26:16.364 [2024-11-19 14:27:14.782496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.623 [2024-11-19 14:27:14.948663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.998  [2024-11-19T14:27:17.127Z] Copying: 656/1024 [MB] (656 MBps) [2024-11-19T14:27:17.694Z] Copying: 1024/1024 [MB] (average 637 MBps) 00:26:19.132 00:26:19.132 14:27:17 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:19.132 14:27:17 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:21.677 14:27:19 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:21.677 14:27:19 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=5b888c98966fe3ce71ecc804bcfc6563 00:26:21.677 14:27:19 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:21.677 14:27:19 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:21.677 Fill FTL, iteration 2 00:26:21.677 14:27:19 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:21.677 14:27:19 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:21.677 14:27:19 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:21.677 14:27:19 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:21.677 14:27:19 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:21.677 14:27:19 -- ftl/common.sh@154 -- # return 0 00:26:21.677 14:27:19 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:21.677 [2024-11-19 14:27:19.678212] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
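The digest just recorded (sums[i]=5b888c98966fe3ce71ecc804bcfc6563) is the reference for the first 1 GiB window; the iteration now starting repeats the pattern one GiB higher. Keeping the digests in sums[] is what makes the prep_upgrade_on_shutdown restart later in this run verifiable: the same windows can be read back and compared, along the lines of the hypothetical check below (the actual comparison happens outside this excerpt):

  for (( i = 0; i < iterations; i++ )); do
      tcp_dd --ib=ftln1 --of=$file --bs=1048576 --count=1024 --qd=2 --skip=$(( i * 1024 ))
      [[ $(md5sum $file | cut -f1 -d' ') == "${sums[i]}" ]] || exit 1   # mismatch fails the test
  done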
00:26:21.678 [2024-11-19 14:27:19.678537] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78923 ] 00:26:21.678 [2024-11-19 14:27:19.826741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.678 [2024-11-19 14:27:20.026594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.064  [2024-11-19T14:27:22.564Z] Copying: 194/1024 [MB] (194 MBps) [2024-11-19T14:27:23.498Z] Copying: 386/1024 [MB] (192 MBps) [2024-11-19T14:27:24.434Z] Copying: 603/1024 [MB] (217 MBps) [2024-11-19T14:27:25.370Z] Copying: 835/1024 [MB] (232 MBps) [2024-11-19T14:27:25.940Z] Copying: 1024/1024 [MB] (average 213 MBps) 00:26:27.378 00:26:27.378 Calculate MD5 checksum, iteration 2 00:26:27.378 14:27:25 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:27.378 14:27:25 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:27.378 14:27:25 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:27.378 14:27:25 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:27.378 14:27:25 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:27.378 14:27:25 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:27.378 14:27:25 -- ftl/common.sh@154 -- # return 0 00:26:27.378 14:27:25 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:27.378 [2024-11-19 14:27:25.935097] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
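With 2 GiB now written through the write buffer, the cache accounting becomes checkable: the property dumps just below report the four cache_device chunks as two CLOSED at utilization 1.0, one OPEN at 0.001953125 and one OPEN at 0.0, and the test counts the non-empty ones by piping bdev_ftl_get_properties through the jq filter seen further down. Given those values the count is 3, matching the used=3 the script derives:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device")
             | .chunks[] | select(.utilization != 0.0)] | length'
  # -> 3  (chunks 0 and 1 at 1.0 plus chunk 2 at 0.001953125; chunk 3 at 0.0 is excluded)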
00:26:27.378 [2024-11-19 14:27:25.935197] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78994 ]
00:26:27.637 [2024-11-19 14:27:26.082210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:27.896 [2024-11-19 14:27:26.246533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:29.274  [2024-11-19T14:27:28.403Z] Copying: 595/1024 [MB] (595 MBps) [2024-11-19T14:27:30.320Z] Copying: 1024/1024 [MB] (average 602 MBps)
00:26:31.758
00:26:31.758 14:27:29 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048
00:26:31.758 14:27:29 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:26:33.673 14:27:32 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d '
00:26:33.673 14:27:32 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=86452f829aec544c70ec019c1adcf6ce
00:26:33.673 14:27:32 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ ))
00:26:33.673 14:27:32 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations ))
00:26:33.673 14:27:32 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:26:33.673 [2024-11-19 14:27:32.211965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:26:33.673 [2024-11-19 14:27:32.212027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:26:33.673 [2024-11-19 14:27:32.212043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms
00:26:33.673 [2024-11-19 14:27:32.212055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:26:33.673 [2024-11-19 14:27:32.212082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:26:33.673 [2024-11-19 14:27:32.212092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:26:33.673 [2024-11-19 14:27:32.212101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:26:33.673 [2024-11-19 14:27:32.212109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:26:33.673 [2024-11-19 14:27:32.212130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:26:33.673 [2024-11-19 14:27:32.212139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:26:33.673 [2024-11-19 14:27:32.212155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:26:33.673 [2024-11-19 14:27:32.212163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:26:33.673 [2024-11-19 14:27:32.212236] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.260 ms, result 0
00:26:33.673 true
00:26:33.934 14:27:32 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:26:33.934 {
00:26:33.934 "name": "ftl",
00:26:33.934 "properties": [
00:26:33.934 {
00:26:33.934 "name": "superblock_version",
00:26:33.934 "value": 5,
00:26:33.934 "read-only": true
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "name": "base_device",
00:26:33.934 "bands": [
00:26:33.934 {
00:26:33.934 "id": 0,
00:26:33.934 "state": "FREE",
00:26:33.934 "validity": 0.0
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 1,
00:26:33.934 "state": "FREE",
00:26:33.934 "validity": 0.0
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 2,
00:26:33.934 "state": "FREE",
00:26:33.934 "validity": 0.0
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 3,
00:26:33.934 "state": "FREE",
00:26:33.934 "validity": 0.0
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 4,
00:26:33.934 "state": "FREE",
00:26:33.934 "validity": 0.0
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 5,
00:26:33.934 "state": "FREE",
00:26:33.934 "validity": 0.0
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 6,
00:26:33.934 "state": "FREE",
00:26:33.934 "validity": 0.0
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 7,
00:26:33.934 "state": "FREE",
00:26:33.934 "validity": 0.0
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 8,
00:26:33.934 "state": "FREE",
00:26:33.934 "validity": 0.0
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 9,
00:26:33.934 "state": "FREE",
00:26:33.934 "validity": 0.0
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 10,
00:26:33.934 "state": "FREE",
00:26:33.934 "validity": 0.0
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 11,
00:26:33.934 "state": "FREE",
00:26:33.934 "validity": 0.0
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 12,
00:26:33.934 "state": "FREE",
00:26:33.934 "validity": 0.0
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 13,
00:26:33.934 "state": "FREE",
00:26:33.934 "validity": 0.0
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 14,
00:26:33.934 "state": "FREE",
00:26:33.934 "validity": 0.0
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 15,
00:26:33.934 "state": "FREE",
00:26:33.934 "validity": 0.0
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 16,
00:26:33.934 "state": "FREE",
00:26:33.934 "validity": 0.0
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 17,
00:26:33.934 "state": "FREE",
00:26:33.934 "validity": 0.0
00:26:33.934 }
00:26:33.934 ],
00:26:33.934 "read-only": true
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "name": "cache_device",
00:26:33.934 "type": "bdev",
00:26:33.934 "chunks": [
00:26:33.934 {
00:26:33.934 "id": 0,
00:26:33.934 "state": "CLOSED",
00:26:33.934 "utilization": 1.0
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 1,
00:26:33.934 "state": "CLOSED",
00:26:33.934 "utilization": 1.0
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 2,
00:26:33.934 "state": "OPEN",
00:26:33.934 "utilization": 0.001953125
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "id": 3,
00:26:33.934 "state": "OPEN",
00:26:33.934 "utilization": 0.0
00:26:33.934 }
00:26:33.934 ],
00:26:33.934 "read-only": true
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "name": "verbose_mode",
00:26:33.934 "value": true,
00:26:33.934 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:26:33.934 },
00:26:33.934 {
00:26:33.934 "name": "prep_upgrade_on_shutdown",
00:26:33.934 "value": false,
00:26:33.934 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:26:33.934 }
00:26:33.934 ]
00:26:33.934 }
00:26:33.934 14:27:32 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
00:26:34.197 [2024-11-19 14:27:32.636429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:26:34.197 [2024-11-19 14:27:32.636652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:26:34.197 [2024-11-19 14:27:32.636676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms
00:26:34.197 [2024-11-19 14:27:32.636685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:26:34.197 [2024-11-19 14:27:32.636720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:26:34.197 [2024-11-19 14:27:32.636729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:26:34.197 [2024-11-19 14:27:32.636738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms
00:26:34.197 [2024-11-19 14:27:32.636747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:26:34.197 [2024-11-19 14:27:32.636767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:26:34.197 [2024-11-19 14:27:32.636776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:26:34.197 [2024-11-19 14:27:32.636784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:26:34.197 [2024-11-19 14:27:32.636791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:26:34.197 [2024-11-19 14:27:32.636858] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.416 ms, result 0
00:26:34.197 true
00:26:34.197 14:27:32 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:26:34.197 14:27:32 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties
00:26:34.197 14:27:32 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:26:34.459 14:27:32 -- ftl/upgrade_shutdown.sh@63 -- # used=3
00:26:34.459 14:27:32 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]]
00:26:34.459 14:27:32 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:26:34.721 [2024-11-19 14:27:33.036826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:26:34.721 [2024-11-19 14:27:33.036902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:26:34.721 [2024-11-19 14:27:33.036914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms
00:26:34.721 [2024-11-19 14:27:33.036922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:26:34.721 [2024-11-19 14:27:33.036945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:26:34.721 [2024-11-19 14:27:33.036953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:26:34.721 [2024-11-19 14:27:33.036961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:26:34.721 [2024-11-19 14:27:33.036970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:26:34.721 [2024-11-19 14:27:33.036990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:26:34.721 [2024-11-19 14:27:33.036997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:26:34.721 [2024-11-19 14:27:33.037005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:26:34.721 [2024-11-19 14:27:33.037011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:26:34.721 [2024-11-19 14:27:33.037065] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.225 ms, result 0
00:26:34.721 true
00:26:34.721 14:27:33 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:26:34.721 {
00:26:34.721 "name": "ftl",
00:26:34.721 "properties": [
00:26:34.721 {
00:26:34.721 "name": "superblock_version",
00:26:34.721 "value": 5,
00:26:34.721 "read-only": true
00:26:34.721 },
00:26:34.721 {
00:26:34.721 "name": "base_device",
00:26:34.721 "bands": [
00:26:34.721 {
00:26:34.721 "id": 0,
00:26:34.721 "state": "FREE",
00:26:34.721 "validity": 0.0
00:26:34.721 },
00:26:34.721 {
00:26:34.721 "id": 1,
00:26:34.721 "state": "FREE",
00:26:34.721 "validity": 0.0
00:26:34.721 },
00:26:34.721 {
00:26:34.721 "id": 2,
00:26:34.721 "state": "FREE",
00:26:34.721 "validity": 0.0
00:26:34.721 },
00:26:34.721 {
00:26:34.721 "id": 3,
00:26:34.721 "state": "FREE",
00:26:34.721 "validity": 0.0
00:26:34.721 },
00:26:34.721 {
00:26:34.721 "id": 4,
00:26:34.721 "state": "FREE",
00:26:34.721 "validity": 0.0
00:26:34.721 },
00:26:34.721 {
00:26:34.721 "id": 5,
00:26:34.721 "state": "FREE",
00:26:34.721 "validity": 0.0
00:26:34.721 },
00:26:34.721 {
00:26:34.721 "id": 6,
00:26:34.721 "state": "FREE",
00:26:34.721 "validity": 0.0
00:26:34.721 },
00:26:34.721 {
00:26:34.721 "id": 7,
00:26:34.721 "state": "FREE",
00:26:34.721 "validity": 0.0
00:26:34.721 },
00:26:34.721 {
00:26:34.721 "id": 8,
00:26:34.721 "state": "FREE",
00:26:34.721 "validity": 0.0
00:26:34.721 },
00:26:34.721 {
00:26:34.721 "id": 9,
00:26:34.721 "state": "FREE",
00:26:34.721 "validity": 0.0
00:26:34.721 },
00:26:34.721 {
00:26:34.721 "id": 10,
00:26:34.721 "state": "FREE",
00:26:34.721 "validity": 0.0
00:26:34.721 },
00:26:34.721 {
00:26:34.721 "id": 11,
00:26:34.721 "state": "FREE",
00:26:34.721 "validity": 0.0
00:26:34.721 },
00:26:34.721 {
00:26:34.721 "id": 12,
00:26:34.721 "state": "FREE",
00:26:34.721 "validity": 0.0
00:26:34.721 },
00:26:34.721 {
00:26:34.721 "id": 13,
00:26:34.721 "state": "FREE",
00:26:34.721 "validity": 0.0
00:26:34.721 },
00:26:34.721 {
00:26:34.721 "id": 14,
00:26:34.721 "state": "FREE",
00:26:34.721 "validity": 0.0
00:26:34.721 },
00:26:34.721 {
00:26:34.721 "id": 15,
00:26:34.722 "state": "FREE",
00:26:34.722 "validity": 0.0
00:26:34.722 },
00:26:34.722 {
00:26:34.722 "id": 16,
00:26:34.722 "state": "FREE",
00:26:34.722 "validity": 0.0
00:26:34.722 },
00:26:34.722 {
00:26:34.722 "id": 17,
00:26:34.722 "state": "FREE",
00:26:34.722 "validity": 0.0
00:26:34.722 }
00:26:34.722 ],
00:26:34.722 "read-only": true
00:26:34.722 },
00:26:34.722 {
00:26:34.722 "name": "cache_device",
00:26:34.722 "type": "bdev",
00:26:34.722 "chunks": [
00:26:34.722 {
00:26:34.722 "id": 0,
00:26:34.722 "state": "CLOSED",
00:26:34.722 "utilization": 1.0
00:26:34.722 },
00:26:34.722 {
00:26:34.722 "id": 1,
00:26:34.722 "state": "CLOSED",
00:26:34.722 "utilization": 1.0
00:26:34.722 },
00:26:34.722 {
00:26:34.722 "id": 2,
00:26:34.722 "state": "OPEN",
00:26:34.722 "utilization": 0.001953125
00:26:34.722 },
00:26:34.722 {
00:26:34.722 "id": 3,
00:26:34.722 "state": "OPEN",
00:26:34.722 "utilization": 0.0
00:26:34.722 }
00:26:34.722 ],
00:26:34.722 "read-only": true
00:26:34.722 },
00:26:34.722 {
00:26:34.722 "name": "verbose_mode",
00:26:34.722 "value": true,
00:26:34.722 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:26:34.722 },
00:26:34.722 {
00:26:34.722 "name": "prep_upgrade_on_shutdown",
00:26:34.722 "value": true,
00:26:34.722 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:26:34.722 }
00:26:34.722 ]
00:26:34.722 }
00:26:34.722 14:27:33 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown
00:26:34.722 14:27:33 -- ftl/common.sh@130 -- # [[ -n 78614 ]]
00:26:34.722 14:27:33 -- ftl/common.sh@131 -- # killprocess 78614
00:26:34.722 14:27:33 -- common/autotest_common.sh@936 -- # '[' -z 78614 ']'
00:26:34.722 14:27:33 --
common/autotest_common.sh@940 -- # kill -0 78614 00:26:34.722 14:27:33 -- common/autotest_common.sh@941 -- # uname 00:26:34.722 14:27:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:34.722 14:27:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78614 00:26:34.984 killing process with pid 78614 00:26:34.984 14:27:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:34.984 14:27:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:34.984 14:27:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78614' 00:26:34.984 14:27:33 -- common/autotest_common.sh@955 -- # kill 78614 00:26:34.984 14:27:33 -- common/autotest_common.sh@960 -- # wait 78614 00:26:35.620 [2024-11-19 14:27:33.941011] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:26:35.620 [2024-11-19 14:27:33.954153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:35.620 [2024-11-19 14:27:33.954187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:35.620 [2024-11-19 14:27:33.954197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:35.620 [2024-11-19 14:27:33.954203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:35.620 [2024-11-19 14:27:33.954219] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:35.620 [2024-11-19 14:27:33.956289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:35.620 [2024-11-19 14:27:33.956311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:35.620 [2024-11-19 14:27:33.956319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.059 ms 00:26:35.620 [2024-11-19 14:27:33.956325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.764 [2024-11-19 14:27:41.503102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.764 [2024-11-19 14:27:41.503281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:43.765 [2024-11-19 14:27:41.503298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7546.730 ms 00:26:43.765 [2024-11-19 14:27:41.503305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.765 [2024-11-19 14:27:41.504271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.765 [2024-11-19 14:27:41.504294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:43.765 [2024-11-19 14:27:41.504302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.948 ms 00:26:43.765 [2024-11-19 14:27:41.504308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.765 [2024-11-19 14:27:41.505163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.765 [2024-11-19 14:27:41.505180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:26:43.765 [2024-11-19 14:27:41.505187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.835 ms 00:26:43.765 [2024-11-19 14:27:41.505193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.765 [2024-11-19 14:27:41.513004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.765 [2024-11-19 14:27:41.513030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:43.765 [2024-11-19 14:27:41.513038] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.768 ms 00:26:43.765 [2024-11-19 14:27:41.513043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.765 [2024-11-19 14:27:41.518184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.765 [2024-11-19 14:27:41.518294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:43.765 [2024-11-19 14:27:41.518306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.115 ms 00:26:43.765 [2024-11-19 14:27:41.518313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.765 [2024-11-19 14:27:41.518368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.765 [2024-11-19 14:27:41.518375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:43.765 [2024-11-19 14:27:41.518382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:26:43.765 [2024-11-19 14:27:41.518392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.765 [2024-11-19 14:27:41.525738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.765 [2024-11-19 14:27:41.525840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:26:43.765 [2024-11-19 14:27:41.525851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.334 ms 00:26:43.765 [2024-11-19 14:27:41.525856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.765 [2024-11-19 14:27:41.533243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.765 [2024-11-19 14:27:41.533269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:26:43.765 [2024-11-19 14:27:41.533275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.355 ms 00:26:43.765 [2024-11-19 14:27:41.533281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.765 [2024-11-19 14:27:41.540802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.765 [2024-11-19 14:27:41.540916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:43.765 [2024-11-19 14:27:41.540927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.496 ms 00:26:43.765 [2024-11-19 14:27:41.540932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.765 [2024-11-19 14:27:41.548255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.765 [2024-11-19 14:27:41.548351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:43.765 [2024-11-19 14:27:41.548361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.275 ms 00:26:43.765 [2024-11-19 14:27:41.548367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.765 [2024-11-19 14:27:41.548389] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:43.765 [2024-11-19 14:27:41.548399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:43.765 [2024-11-19 14:27:41.548407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:43.765 [2024-11-19 14:27:41.548413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:43.765 [2024-11-19 14:27:41.548419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 
wr_cnt: 0 state: free 00:26:43.765 [2024-11-19 14:27:41.548425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:43.765 [2024-11-19 14:27:41.548430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:43.765 [2024-11-19 14:27:41.548436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:43.765 [2024-11-19 14:27:41.548442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:43.765 [2024-11-19 14:27:41.548448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:43.765 [2024-11-19 14:27:41.548453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:43.765 [2024-11-19 14:27:41.548459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:43.765 [2024-11-19 14:27:41.548465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:43.765 [2024-11-19 14:27:41.548470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:43.765 [2024-11-19 14:27:41.548476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:43.765 [2024-11-19 14:27:41.548481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:43.765 [2024-11-19 14:27:41.548493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:43.765 [2024-11-19 14:27:41.548499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:43.765 [2024-11-19 14:27:41.548504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:43.765 [2024-11-19 14:27:41.548512] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:43.765 [2024-11-19 14:27:41.548518] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 37802868-17ca-49ed-ba1a-3910b4de473b 00:26:43.765 [2024-11-19 14:27:41.548524] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:43.765 [2024-11-19 14:27:41.548529] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:26:43.765 [2024-11-19 14:27:41.548534] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:43.765 [2024-11-19 14:27:41.548540] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:43.765 [2024-11-19 14:27:41.548546] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:43.765 [2024-11-19 14:27:41.548552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:43.765 [2024-11-19 14:27:41.548560] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:43.765 [2024-11-19 14:27:41.548565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:43.765 [2024-11-19 14:27:41.548570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:43.765 [2024-11-19 14:27:41.548576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.765 [2024-11-19 14:27:41.548581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:43.765 [2024-11-19 14:27:41.548587] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:26:43.765 [2024-11-19 14:27:41.548593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.765 [2024-11-19 14:27:41.558043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.765 [2024-11-19 14:27:41.558067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:43.765 [2024-11-19 14:27:41.558075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.437 ms 00:26:43.765 [2024-11-19 14:27:41.558081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.765 [2024-11-19 14:27:41.558242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.765 [2024-11-19 14:27:41.558248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:43.765 [2024-11-19 14:27:41.558254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.134 ms 00:26:43.765 [2024-11-19 14:27:41.558259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.765 [2024-11-19 14:27:41.593126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.765 [2024-11-19 14:27:41.593154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:43.765 [2024-11-19 14:27:41.593162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.765 [2024-11-19 14:27:41.593171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.765 [2024-11-19 14:27:41.593195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.765 [2024-11-19 14:27:41.593201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:43.765 [2024-11-19 14:27:41.593206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.765 [2024-11-19 14:27:41.593211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.765 [2024-11-19 14:27:41.593257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.765 [2024-11-19 14:27:41.593263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:43.765 [2024-11-19 14:27:41.593269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.765 [2024-11-19 14:27:41.593275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.765 [2024-11-19 14:27:41.593288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.765 [2024-11-19 14:27:41.593294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:43.765 [2024-11-19 14:27:41.593300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.765 [2024-11-19 14:27:41.593305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.765 [2024-11-19 14:27:41.652181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.765 [2024-11-19 14:27:41.652212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:43.765 [2024-11-19 14:27:41.652220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.765 [2024-11-19 14:27:41.652227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.765 [2024-11-19 14:27:41.674907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.765 [2024-11-19 14:27:41.675015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:43.765 
[2024-11-19 14:27:41.675027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.765 [2024-11-19 14:27:41.675034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.765 [2024-11-19 14:27:41.675077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.765 [2024-11-19 14:27:41.675085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:43.765 [2024-11-19 14:27:41.675091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.766 [2024-11-19 14:27:41.675096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.766 [2024-11-19 14:27:41.675127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.766 [2024-11-19 14:27:41.675138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:43.766 [2024-11-19 14:27:41.675144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.766 [2024-11-19 14:27:41.675150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.766 [2024-11-19 14:27:41.675219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.766 [2024-11-19 14:27:41.675226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:43.766 [2024-11-19 14:27:41.675233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.766 [2024-11-19 14:27:41.675238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.766 [2024-11-19 14:27:41.675263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.766 [2024-11-19 14:27:41.675270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:43.766 [2024-11-19 14:27:41.675278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.766 [2024-11-19 14:27:41.675284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.766 [2024-11-19 14:27:41.675311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.766 [2024-11-19 14:27:41.675317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:43.766 [2024-11-19 14:27:41.675323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.766 [2024-11-19 14:27:41.675329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.766 [2024-11-19 14:27:41.675362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.766 [2024-11-19 14:27:41.675371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:43.766 [2024-11-19 14:27:41.675377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.766 [2024-11-19 14:27:41.675382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.766 [2024-11-19 14:27:41.675470] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7721.269 ms, result 0 00:26:47.071 14:27:45 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:47.071 14:27:45 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:47.071 14:27:45 -- ftl/common.sh@81 -- # local base_bdev= 00:26:47.071 14:27:45 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:47.071 14:27:45 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:47.071 14:27:45 -- ftl/common.sh@89 -- # spdk_tgt_pid=79197 00:26:47.071 14:27:45 -- 
ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:47.071 14:27:45 -- ftl/common.sh@91 -- # waitforlisten 79197 00:26:47.071 14:27:45 -- common/autotest_common.sh@829 -- # '[' -z 79197 ']' 00:26:47.071 14:27:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.071 14:27:45 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:47.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.071 14:27:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:47.071 14:27:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.071 14:27:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:47.071 14:27:45 -- common/autotest_common.sh@10 -- # set +x 00:26:47.071 [2024-11-19 14:27:45.250741] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:47.071 [2024-11-19 14:27:45.251054] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79197 ] 00:26:47.071 [2024-11-19 14:27:45.398521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.071 [2024-11-19 14:27:45.536722] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:47.071 [2024-11-19 14:27:45.536899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.643 [2024-11-19 14:27:46.069942] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:47.643 [2024-11-19 14:27:46.070159] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:47.906 [2024-11-19 14:27:46.206149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.906 [2024-11-19 14:27:46.206183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:47.906 [2024-11-19 14:27:46.206193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:47.906 [2024-11-19 14:27:46.206200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.906 [2024-11-19 14:27:46.206238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.906 [2024-11-19 14:27:46.206247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:47.906 [2024-11-19 14:27:46.206253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:47.906 [2024-11-19 14:27:46.206259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.906 [2024-11-19 14:27:46.206273] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:47.906 [2024-11-19 14:27:46.206813] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:47.906 [2024-11-19 14:27:46.206836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.906 [2024-11-19 14:27:46.206842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:47.906 [2024-11-19 14:27:46.206848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.566 ms 00:26:47.906 [2024-11-19 14:27:46.206853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:47.906 [2024-11-19 14:27:46.207824] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:47.906 [2024-11-19 14:27:46.217570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.907 [2024-11-19 14:27:46.217692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:47.907 [2024-11-19 14:27:46.217707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.747 ms 00:26:47.907 [2024-11-19 14:27:46.217713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.907 [2024-11-19 14:27:46.217762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.907 [2024-11-19 14:27:46.217770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:47.907 [2024-11-19 14:27:46.217776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:47.907 [2024-11-19 14:27:46.217781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.907 [2024-11-19 14:27:46.222231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.907 [2024-11-19 14:27:46.222255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:47.907 [2024-11-19 14:27:46.222263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.402 ms 00:26:47.907 [2024-11-19 14:27:46.222272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.907 [2024-11-19 14:27:46.222299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.907 [2024-11-19 14:27:46.222305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:47.907 [2024-11-19 14:27:46.222312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:47.907 [2024-11-19 14:27:46.222317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.907 [2024-11-19 14:27:46.222351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.907 [2024-11-19 14:27:46.222358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:47.907 [2024-11-19 14:27:46.222364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:47.907 [2024-11-19 14:27:46.222369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.907 [2024-11-19 14:27:46.222390] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:47.907 [2024-11-19 14:27:46.225155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.907 [2024-11-19 14:27:46.225179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:47.907 [2024-11-19 14:27:46.225188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.772 ms 00:26:47.907 [2024-11-19 14:27:46.225194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.907 [2024-11-19 14:27:46.225214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.907 [2024-11-19 14:27:46.225220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:47.907 [2024-11-19 14:27:46.225226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:47.907 [2024-11-19 14:27:46.225232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.907 [2024-11-19 14:27:46.225248] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 
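Note on reading the trace: every FTL management step above is logged by mngt/ftl_mngt.c as a fixed quadruple (Action or Rollback, name, duration, status), so per-step timings can be pulled straight out of a saved console log. A rough sketch, assuming the output above has been captured to build.log (hypothetical filename; not part of the test suite):

    # pair each management step name with its duration
    grep -E 'trace_step.*(name:|duration:)' build.log \
      | sed -E 's/.*(name|duration): /\1: /' \
      | paste - -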
00:26:47.907 [2024-11-19 14:27:46.225261] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:26:47.907 [2024-11-19 14:27:46.225286] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:47.907 [2024-11-19 14:27:46.225300] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:26:47.907 [2024-11-19 14:27:46.225356] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:47.907 [2024-11-19 14:27:46.225363] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:47.907 [2024-11-19 14:27:46.225371] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:47.907 [2024-11-19 14:27:46.225378] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:47.907 [2024-11-19 14:27:46.225384] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:47.907 [2024-11-19 14:27:46.225390] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:47.907 [2024-11-19 14:27:46.225398] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:47.907 [2024-11-19 14:27:46.225404] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:47.907 [2024-11-19 14:27:46.225411] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:47.907 [2024-11-19 14:27:46.225416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.907 [2024-11-19 14:27:46.225422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:47.907 [2024-11-19 14:27:46.225427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.170 ms 00:26:47.907 [2024-11-19 14:27:46.225432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.907 [2024-11-19 14:27:46.225480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.907 [2024-11-19 14:27:46.225486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:47.907 [2024-11-19 14:27:46.225491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:47.907 [2024-11-19 14:27:46.225497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.907 [2024-11-19 14:27:46.225554] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:47.907 [2024-11-19 14:27:46.225561] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:47.907 [2024-11-19 14:27:46.225567] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:47.907 [2024-11-19 14:27:46.225572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:47.907 [2024-11-19 14:27:46.225578] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:47.907 [2024-11-19 14:27:46.225583] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:47.907 [2024-11-19 14:27:46.225588] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:47.907 [2024-11-19 14:27:46.225593] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:47.907 [2024-11-19 14:27:46.225599] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 
MiB 00:26:47.907 [2024-11-19 14:27:46.225604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:47.907 [2024-11-19 14:27:46.225609] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:47.907 [2024-11-19 14:27:46.225614] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:47.907 [2024-11-19 14:27:46.225620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:47.907 [2024-11-19 14:27:46.225626] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:47.907 [2024-11-19 14:27:46.225631] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:47.907 [2024-11-19 14:27:46.225636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:47.907 [2024-11-19 14:27:46.225641] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:47.907 [2024-11-19 14:27:46.225646] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:47.907 [2024-11-19 14:27:46.225650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:47.907 [2024-11-19 14:27:46.225655] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:47.907 [2024-11-19 14:27:46.225660] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:47.907 [2024-11-19 14:27:46.225665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:47.907 [2024-11-19 14:27:46.225670] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:47.907 [2024-11-19 14:27:46.225675] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:47.907 [2024-11-19 14:27:46.225679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:47.907 [2024-11-19 14:27:46.225684] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:47.907 [2024-11-19 14:27:46.225689] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:47.907 [2024-11-19 14:27:46.225693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:47.907 [2024-11-19 14:27:46.225698] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:47.907 [2024-11-19 14:27:46.225703] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:47.907 [2024-11-19 14:27:46.225708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:47.907 [2024-11-19 14:27:46.225712] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:47.907 [2024-11-19 14:27:46.225717] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:47.908 [2024-11-19 14:27:46.225722] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:47.908 [2024-11-19 14:27:46.225727] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:47.908 [2024-11-19 14:27:46.225731] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:47.908 [2024-11-19 14:27:46.225736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:47.908 [2024-11-19 14:27:46.225741] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:47.908 [2024-11-19 14:27:46.225746] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:47.908 [2024-11-19 14:27:46.225751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:47.908 [2024-11-19 14:27:46.225755] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:26:47.908 [2024-11-19 14:27:46.225760] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:47.908 [2024-11-19 14:27:46.225765] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:47.908 [2024-11-19 14:27:46.225771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:47.908 [2024-11-19 14:27:46.225778] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:47.908 [2024-11-19 14:27:46.225784] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:47.908 [2024-11-19 14:27:46.225789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:47.908 [2024-11-19 14:27:46.225794] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:47.908 [2024-11-19 14:27:46.225798] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:47.908 [2024-11-19 14:27:46.225804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:47.908 [2024-11-19 14:27:46.225810] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:47.908 [2024-11-19 14:27:46.225817] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:47.908 [2024-11-19 14:27:46.225825] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:47.908 [2024-11-19 14:27:46.225831] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:47.908 [2024-11-19 14:27:46.225836] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:47.908 [2024-11-19 14:27:46.225841] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:47.908 [2024-11-19 14:27:46.225846] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:47.908 [2024-11-19 14:27:46.225855] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:47.908 [2024-11-19 14:27:46.225861] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:47.908 [2024-11-19 14:27:46.225866] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:47.908 [2024-11-19 14:27:46.225872] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:47.908 [2024-11-19 14:27:46.225894] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:47.908 [2024-11-19 14:27:46.225899] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:47.908 [2024-11-19 14:27:46.225905] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:47.908 [2024-11-19 14:27:46.225910] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:47.908 [2024-11-19 14:27:46.225915] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:47.908 [2024-11-19 14:27:46.225921] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:47.908 [2024-11-19 14:27:46.225927] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:47.908 [2024-11-19 14:27:46.225932] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:47.908 [2024-11-19 14:27:46.225938] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:47.908 [2024-11-19 14:27:46.225944] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:47.908 [2024-11-19 14:27:46.225950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.908 [2024-11-19 14:27:46.225955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:47.908 [2024-11-19 14:27:46.225961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.429 ms 00:26:47.908 [2024-11-19 14:27:46.225966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.908 [2024-11-19 14:27:46.237705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.908 [2024-11-19 14:27:46.237735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:47.908 [2024-11-19 14:27:46.237742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.707 ms 00:26:47.908 [2024-11-19 14:27:46.237748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.908 [2024-11-19 14:27:46.237776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.908 [2024-11-19 14:27:46.237782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:47.908 [2024-11-19 14:27:46.237787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:47.908 [2024-11-19 14:27:46.237793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.908 [2024-11-19 14:27:46.261842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.908 [2024-11-19 14:27:46.261868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:47.908 [2024-11-19 14:27:46.261889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.011 ms 00:26:47.908 [2024-11-19 14:27:46.261895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.908 [2024-11-19 14:27:46.261915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.908 [2024-11-19 14:27:46.261922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:47.908 [2024-11-19 14:27:46.261929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:47.908 [2024-11-19 14:27:46.261935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.908 [2024-11-19 14:27:46.262246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.908 [2024-11-19 14:27:46.262261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:47.908 [2024-11-19 
14:27:46.262268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.275 ms 00:26:47.908 [2024-11-19 14:27:46.262274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.908 [2024-11-19 14:27:46.262304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.908 [2024-11-19 14:27:46.262310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:47.908 [2024-11-19 14:27:46.262316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:47.908 [2024-11-19 14:27:46.262322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.908 [2024-11-19 14:27:46.274286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.908 [2024-11-19 14:27:46.274310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:47.908 [2024-11-19 14:27:46.274317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.948 ms 00:26:47.908 [2024-11-19 14:27:46.274323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.908 [2024-11-19 14:27:46.284131] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:47.908 [2024-11-19 14:27:46.284242] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:47.908 [2024-11-19 14:27:46.284253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.908 [2024-11-19 14:27:46.284259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:47.908 [2024-11-19 14:27:46.284266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.855 ms 00:26:47.908 [2024-11-19 14:27:46.284277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.908 [2024-11-19 14:27:46.294780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.908 [2024-11-19 14:27:46.294807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:47.908 [2024-11-19 14:27:46.294815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.476 ms 00:26:47.908 [2024-11-19 14:27:46.294822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.908 [2024-11-19 14:27:46.303456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.908 [2024-11-19 14:27:46.303480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:47.908 [2024-11-19 14:27:46.303488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.602 ms 00:26:47.908 [2024-11-19 14:27:46.303493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.908 [2024-11-19 14:27:46.312380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.908 [2024-11-19 14:27:46.312415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:47.908 [2024-11-19 14:27:46.312423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.859 ms 00:26:47.908 [2024-11-19 14:27:46.312428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.908 [2024-11-19 14:27:46.312708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.908 [2024-11-19 14:27:46.312719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:47.908 [2024-11-19 14:27:46.312727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.209 ms 
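The restore steps above and the P2L checkpoint restore that follows are the clean-startup path: the earlier 'FTL shutdown' finished with result 0, so this startup can simply restore persisted metadata rather than recover it, and every step returns status 0. The whole sequence is driven by instantiating the FTL bdev from tgt.json; a hedged sketch of the equivalent RPC (the bdev name ftl and cache cachen1p0 are taken from the log, the base bdev name is a placeholder, and flag spellings can vary between SPDK versions):

    # re-attach the FTL bdev whose startup is traced above (sketch)
    scripts/rpc.py bdev_ftl_create -b ftl -d <base_bdev> -c cachen1p0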
00:26:47.908 [2024-11-19 14:27:46.312734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.908 [2024-11-19 14:27:46.359455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.908 [2024-11-19 14:27:46.359485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:47.908 [2024-11-19 14:27:46.359494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 46.706 ms 00:26:47.908 [2024-11-19 14:27:46.359500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.908 [2024-11-19 14:27:46.367364] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:47.908 [2024-11-19 14:27:46.367926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.908 [2024-11-19 14:27:46.367948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:47.908 [2024-11-19 14:27:46.367956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.392 ms 00:26:47.908 [2024-11-19 14:27:46.367964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.909 [2024-11-19 14:27:46.368008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.909 [2024-11-19 14:27:46.368015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:47.909 [2024-11-19 14:27:46.368022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:47.909 [2024-11-19 14:27:46.368027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.909 [2024-11-19 14:27:46.368058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.909 [2024-11-19 14:27:46.368065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:47.909 [2024-11-19 14:27:46.368076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:47.909 [2024-11-19 14:27:46.368081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.909 [2024-11-19 14:27:46.369037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.909 [2024-11-19 14:27:46.369062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:47.909 [2024-11-19 14:27:46.369069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.940 ms 00:26:47.909 [2024-11-19 14:27:46.369075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.909 [2024-11-19 14:27:46.369095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.909 [2024-11-19 14:27:46.369101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:47.909 [2024-11-19 14:27:46.369108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:47.909 [2024-11-19 14:27:46.369113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.909 [2024-11-19 14:27:46.369141] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:47.909 [2024-11-19 14:27:46.369149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.909 [2024-11-19 14:27:46.369156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:47.909 [2024-11-19 14:27:46.369162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:47.909 [2024-11-19 14:27:46.369168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.909 [2024-11-19 14:27:46.386890] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.909 [2024-11-19 14:27:46.386918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:47.909 [2024-11-19 14:27:46.386926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.708 ms 00:26:47.909 [2024-11-19 14:27:46.386932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.909 [2024-11-19 14:27:46.386989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.909 [2024-11-19 14:27:46.386996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:47.909 [2024-11-19 14:27:46.387002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:26:47.909 [2024-11-19 14:27:46.387008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.909 [2024-11-19 14:27:46.387728] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 181.257 ms, result 0 00:26:47.909 [2024-11-19 14:27:46.403148] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.909 [2024-11-19 14:27:46.419166] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:47.909 [2024-11-19 14:27:46.427256] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:48.481 14:27:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:48.481 14:27:46 -- common/autotest_common.sh@862 -- # return 0 00:26:48.481 14:27:46 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:48.481 14:27:46 -- ftl/common.sh@95 -- # return 0 00:26:48.481 14:27:46 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:48.481 [2024-11-19 14:27:46.916159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.481 [2024-11-19 14:27:46.916192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:48.481 [2024-11-19 14:27:46.916203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:48.481 [2024-11-19 14:27:46.916209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.481 [2024-11-19 14:27:46.916226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.481 [2024-11-19 14:27:46.916233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:48.481 [2024-11-19 14:27:46.916239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:48.481 [2024-11-19 14:27:46.916247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.481 [2024-11-19 14:27:46.916262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.481 [2024-11-19 14:27:46.916268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:48.481 [2024-11-19 14:27:46.916274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:48.481 [2024-11-19 14:27:46.916279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.481 [2024-11-19 14:27:46.916323] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.159 ms, result 0 00:26:48.481 true 00:26:48.481 14:27:46 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b 
ftl 00:26:48.742 { 00:26:48.742 "name": "ftl", 00:26:48.742 "properties": [ 00:26:48.742 { 00:26:48.742 "name": "superblock_version", 00:26:48.742 "value": 5, 00:26:48.742 "read-only": true 00:26:48.742 }, 00:26:48.742 { 00:26:48.742 "name": "base_device", 00:26:48.742 "bands": [ 00:26:48.742 { 00:26:48.742 "id": 0, 00:26:48.742 "state": "CLOSED", 00:26:48.742 "validity": 1.0 00:26:48.742 }, 00:26:48.742 { 00:26:48.742 "id": 1, 00:26:48.742 "state": "CLOSED", 00:26:48.742 "validity": 1.0 00:26:48.742 }, 00:26:48.742 { 00:26:48.742 "id": 2, 00:26:48.742 "state": "CLOSED", 00:26:48.742 "validity": 0.007843137254901933 00:26:48.742 }, 00:26:48.742 { 00:26:48.742 "id": 3, 00:26:48.742 "state": "FREE", 00:26:48.742 "validity": 0.0 00:26:48.742 }, 00:26:48.742 { 00:26:48.742 "id": 4, 00:26:48.742 "state": "FREE", 00:26:48.742 "validity": 0.0 00:26:48.742 }, 00:26:48.742 { 00:26:48.742 "id": 5, 00:26:48.742 "state": "FREE", 00:26:48.742 "validity": 0.0 00:26:48.742 }, 00:26:48.742 { 00:26:48.742 "id": 6, 00:26:48.742 "state": "FREE", 00:26:48.743 "validity": 0.0 00:26:48.743 }, 00:26:48.743 { 00:26:48.743 "id": 7, 00:26:48.743 "state": "FREE", 00:26:48.743 "validity": 0.0 00:26:48.743 }, 00:26:48.743 { 00:26:48.743 "id": 8, 00:26:48.743 "state": "FREE", 00:26:48.743 "validity": 0.0 00:26:48.743 }, 00:26:48.743 { 00:26:48.743 "id": 9, 00:26:48.743 "state": "FREE", 00:26:48.743 "validity": 0.0 00:26:48.743 }, 00:26:48.743 { 00:26:48.743 "id": 10, 00:26:48.743 "state": "FREE", 00:26:48.743 "validity": 0.0 00:26:48.743 }, 00:26:48.743 { 00:26:48.743 "id": 11, 00:26:48.743 "state": "FREE", 00:26:48.743 "validity": 0.0 00:26:48.743 }, 00:26:48.743 { 00:26:48.743 "id": 12, 00:26:48.743 "state": "FREE", 00:26:48.743 "validity": 0.0 00:26:48.743 }, 00:26:48.743 { 00:26:48.743 "id": 13, 00:26:48.743 "state": "FREE", 00:26:48.743 "validity": 0.0 00:26:48.743 }, 00:26:48.743 { 00:26:48.743 "id": 14, 00:26:48.743 "state": "FREE", 00:26:48.743 "validity": 0.0 00:26:48.743 }, 00:26:48.743 { 00:26:48.743 "id": 15, 00:26:48.743 "state": "FREE", 00:26:48.743 "validity": 0.0 00:26:48.743 }, 00:26:48.743 { 00:26:48.743 "id": 16, 00:26:48.743 "state": "FREE", 00:26:48.743 "validity": 0.0 00:26:48.743 }, 00:26:48.743 { 00:26:48.743 "id": 17, 00:26:48.743 "state": "FREE", 00:26:48.743 "validity": 0.0 00:26:48.743 } 00:26:48.743 ], 00:26:48.743 "read-only": true 00:26:48.743 }, 00:26:48.743 { 00:26:48.743 "name": "cache_device", 00:26:48.743 "type": "bdev", 00:26:48.743 "chunks": [ 00:26:48.743 { 00:26:48.743 "id": 0, 00:26:48.743 "state": "OPEN", 00:26:48.743 "utilization": 0.0 00:26:48.743 }, 00:26:48.743 { 00:26:48.743 "id": 1, 00:26:48.743 "state": "OPEN", 00:26:48.743 "utilization": 0.0 00:26:48.743 }, 00:26:48.743 { 00:26:48.743 "id": 2, 00:26:48.743 "state": "FREE", 00:26:48.743 "utilization": 0.0 00:26:48.743 }, 00:26:48.743 { 00:26:48.743 "id": 3, 00:26:48.743 "state": "FREE", 00:26:48.743 "utilization": 0.0 00:26:48.743 } 00:26:48.743 ], 00:26:48.743 "read-only": true 00:26:48.743 }, 00:26:48.743 { 00:26:48.743 "name": "verbose_mode", 00:26:48.743 "value": true, 00:26:48.743 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:48.743 }, 00:26:48.743 { 00:26:48.743 "name": "prep_upgrade_on_shutdown", 00:26:48.743 "value": false, 00:26:48.743 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:48.743 } 00:26:48.743 ] 00:26:48.743 } 00:26:48.743 14:27:47 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 
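For reference, the JSON dump above is the full output of bdev_ftl_get_properties -b ftl: per-band state and validity on the base device, per-chunk state and utilization on the cache device, plus the writable verbose_mode and prep_upgrade_on_shutdown properties. The jq filters that follow count in-use chunks and open bands; a complementary hedged one-liner over the same output (not part of the test) would count closed bands, giving 3 for the dump above:

    scripts/rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "base_device")
             | .bands[] | select(.state == "CLOSED")] | length'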
00:26:48.743 14:27:47 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:48.743 14:27:47 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:49.004 14:27:47 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:49.004 14:27:47 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:49.004 14:27:47 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:49.004 14:27:47 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:49.004 14:27:47 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:49.004 Validate MD5 checksum, iteration 1 00:26:49.004 14:27:47 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:49.004 14:27:47 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:49.004 14:27:47 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:49.004 14:27:47 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:49.004 14:27:47 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:49.004 14:27:47 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:49.004 14:27:47 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:49.004 14:27:47 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:49.004 14:27:47 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:49.004 14:27:47 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:49.004 14:27:47 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:49.004 14:27:47 -- ftl/common.sh@154 -- # return 0 00:26:49.004 14:27:47 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:49.004 [2024-11-19 14:27:47.558846] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
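Each 'Validate MD5 checksum' iteration reads 1 GiB (1024 blocks of 1048576 bytes at queue depth 2) from the exported ftln1 namespace over the NVMe/TCP listener and hashes the result. In essence the loop is as follows (a paraphrase; the real logic lives in test/ftl/upgrade_shutdown.sh, with the flags exactly as in the xtrace above):

    for skip in 0 1024; do
      tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
             --bs=1048576 --count=1024 --qd=2 --skip=$skip
      md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '
    done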
00:26:49.004 [2024-11-19 14:27:47.559139] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79232 ] 00:26:49.265 [2024-11-19 14:27:47.707236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.525 [2024-11-19 14:27:47.876927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.909  [2024-11-19T14:27:50.412Z] Copying: 607/1024 [MB] (607 MBps) [2024-11-19T14:27:53.718Z] Copying: 1024/1024 [MB] (average 543 MBps) 00:26:55.156 00:26:55.156 14:27:53 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:55.156 14:27:53 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:57.074 14:27:55 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:57.074 14:27:55 -- ftl/upgrade_shutdown.sh@103 -- # sum=5b888c98966fe3ce71ecc804bcfc6563 00:26:57.074 14:27:55 -- ftl/upgrade_shutdown.sh@105 -- # [[ 5b888c98966fe3ce71ecc804bcfc6563 != \5\b\8\8\8\c\9\8\9\6\6\f\e\3\c\e\7\1\e\c\c\8\0\4\b\c\f\c\6\5\6\3 ]] 00:26:57.074 14:27:55 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:57.074 14:27:55 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:57.074 Validate MD5 checksum, iteration 2 00:26:57.074 14:27:55 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:57.074 14:27:55 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:57.074 14:27:55 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:57.074 14:27:55 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:57.074 14:27:55 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:57.074 14:27:55 -- ftl/common.sh@154 -- # return 0 00:26:57.074 14:27:55 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:57.074 [2024-11-19 14:27:55.594129] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
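The heavily escaped comparison in the xtrace above ([[ 5b888c... != \5\b\8\8... ]]) is ordinary bash: inside [[ ]] the right-hand side of != is treated as a glob pattern, so the script backslash-escapes every character to force a literal match, and xtrace prints those escapes. A quoted right-hand side is the equivalent, more readable form (sketch, reusing the iteration-1 sum):

    sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    [[ "$sum" != "5b888c98966fe3ce71ecc804bcfc6563" ]] && exit 1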
00:26:57.074 [2024-11-19 14:27:55.594359] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79322 ] 00:26:57.335 [2024-11-19 14:27:55.736775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.597 [2024-11-19 14:27:55.913769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.986  [2024-11-19T14:27:58.493Z] Copying: 592/1024 [MB] (592 MBps) [2024-11-19T14:28:00.408Z] Copying: 1024/1024 [MB] (average 531 MBps) 00:27:01.846 00:27:01.846 14:27:59 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:01.846 14:27:59 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:03.762 14:28:01 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:03.762 14:28:01 -- ftl/upgrade_shutdown.sh@103 -- # sum=86452f829aec544c70ec019c1adcf6ce 00:27:03.762 14:28:01 -- ftl/upgrade_shutdown.sh@105 -- # [[ 86452f829aec544c70ec019c1adcf6ce != \8\6\4\5\2\f\8\2\9\a\e\c\5\4\4\c\7\0\e\c\0\1\9\c\1\a\d\c\f\6\c\e ]] 00:27:03.762 14:28:01 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:03.762 14:28:01 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:03.762 14:28:01 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:03.762 14:28:01 -- ftl/common.sh@137 -- # [[ -n 79197 ]] 00:27:03.762 14:28:01 -- ftl/common.sh@138 -- # kill -9 79197 00:27:03.762 14:28:01 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:03.762 14:28:01 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:03.762 14:28:01 -- ftl/common.sh@81 -- # local base_bdev= 00:27:03.762 14:28:01 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:03.762 14:28:01 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:03.762 14:28:01 -- ftl/common.sh@89 -- # spdk_tgt_pid=79401 00:27:03.762 14:28:01 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:03.762 14:28:01 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:03.762 14:28:01 -- ftl/common.sh@91 -- # waitforlisten 79401 00:27:03.762 14:28:01 -- common/autotest_common.sh@829 -- # '[' -z 79401 ']' 00:27:03.762 14:28:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.762 14:28:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:03.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.762 14:28:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.762 14:28:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:03.762 14:28:01 -- common/autotest_common.sh@10 -- # set +x 00:27:03.762 [2024-11-19 14:28:02.005064] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
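Both checksums match, and the test moves to its real subject: tcp_target_shutdown_dirty kills the target with SIGKILL, so the FTL instance never runs the orderly 'FTL shutdown' sequence seen earlier, and a fresh spdk_tgt (pid 79401) is started against the same tgt.json. Per the xtrace, the step boils down to:

    # from ftl/common.sh, per the trace above (sketch)
    kill -9 "$spdk_tgt_pid"   # no clean shutdown state is written
    unset spdk_tgt_pid
    # the restarted target must now recover: note the 'Initialize recovery',
    # 'Recover band state' and P2L checkpoint replay steps in the trace below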
00:27:03.762 [2024-11-19 14:28:02.005159] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79401 ] 00:27:03.762 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 79197 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:03.762 [2024-11-19 14:28:02.146243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.762 [2024-11-19 14:28:02.283725] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:03.762 [2024-11-19 14:28:02.283901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.335 [2024-11-19 14:28:02.809854] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:04.335 [2024-11-19 14:28:02.809913] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:04.598 [2024-11-19 14:28:02.950210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.598 [2024-11-19 14:28:02.950245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:04.598 [2024-11-19 14:28:02.950256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:04.598 [2024-11-19 14:28:02.950262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.598 [2024-11-19 14:28:02.950298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.598 [2024-11-19 14:28:02.950307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:04.598 [2024-11-19 14:28:02.950314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:04.598 [2024-11-19 14:28:02.950319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.598 [2024-11-19 14:28:02.950334] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:04.598 [2024-11-19 14:28:02.950900] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:04.598 [2024-11-19 14:28:02.950951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.598 [2024-11-19 14:28:02.950958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:04.598 [2024-11-19 14:28:02.950964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.620 ms 00:27:04.598 [2024-11-19 14:28:02.950970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.598 [2024-11-19 14:28:02.951289] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:04.598 [2024-11-19 14:28:02.963550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.598 [2024-11-19 14:28:02.963578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:04.598 [2024-11-19 14:28:02.963587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.262 ms 00:27:04.598 [2024-11-19 14:28:02.963593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.598 [2024-11-19 14:28:02.970283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.598 [2024-11-19 14:28:02.970396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:04.598 [2024-11-19 14:28:02.970409] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:04.598 [2024-11-19 14:28:02.970415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.598 [2024-11-19 14:28:02.970661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.598 [2024-11-19 14:28:02.970669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:04.598 [2024-11-19 14:28:02.970676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:27:04.598 [2024-11-19 14:28:02.970681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.598 [2024-11-19 14:28:02.970706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.598 [2024-11-19 14:28:02.970712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:04.598 [2024-11-19 14:28:02.970718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:04.598 [2024-11-19 14:28:02.970725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.598 [2024-11-19 14:28:02.970743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.598 [2024-11-19 14:28:02.970750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:04.598 [2024-11-19 14:28:02.970755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:04.598 [2024-11-19 14:28:02.970760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.598 [2024-11-19 14:28:02.970779] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:04.598 [2024-11-19 14:28:02.973163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.598 [2024-11-19 14:28:02.973185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:04.598 [2024-11-19 14:28:02.973192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.392 ms 00:27:04.598 [2024-11-19 14:28:02.973198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.598 [2024-11-19 14:28:02.973218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.598 [2024-11-19 14:28:02.973224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:04.598 [2024-11-19 14:28:02.973231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:04.599 [2024-11-19 14:28:02.973236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.599 [2024-11-19 14:28:02.973253] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:04.599 [2024-11-19 14:28:02.973266] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:04.599 [2024-11-19 14:28:02.973291] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:04.599 [2024-11-19 14:28:02.973302] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:04.599 [2024-11-19 14:28:02.973358] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:04.599 [2024-11-19 14:28:02.973367] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:04.599 [2024-11-19 14:28:02.973376] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:27:04.599 [2024-11-19 14:28:02.973383] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:04.599 [2024-11-19 14:28:02.973389] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:04.599 [2024-11-19 14:28:02.973394] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:04.599 [2024-11-19 14:28:02.973400] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:04.599 [2024-11-19 14:28:02.973405] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:04.599 [2024-11-19 14:28:02.973410] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:04.599 [2024-11-19 14:28:02.973415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.599 [2024-11-19 14:28:02.973421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:04.599 [2024-11-19 14:28:02.973426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.164 ms 00:27:04.599 [2024-11-19 14:28:02.973434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.599 [2024-11-19 14:28:02.973480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.599 [2024-11-19 14:28:02.973486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:04.599 [2024-11-19 14:28:02.973491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:04.599 [2024-11-19 14:28:02.973497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.599 [2024-11-19 14:28:02.973552] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:04.599 [2024-11-19 14:28:02.973559] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:04.599 [2024-11-19 14:28:02.973565] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:04.599 [2024-11-19 14:28:02.973571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.599 [2024-11-19 14:28:02.973578] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:04.599 [2024-11-19 14:28:02.973583] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:04.599 [2024-11-19 14:28:02.973588] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:04.599 [2024-11-19 14:28:02.973592] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:04.599 [2024-11-19 14:28:02.973598] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:04.599 [2024-11-19 14:28:02.973603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.599 [2024-11-19 14:28:02.973608] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:04.599 [2024-11-19 14:28:02.973614] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:04.599 [2024-11-19 14:28:02.973619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.599 [2024-11-19 14:28:02.973624] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:04.599 [2024-11-19 14:28:02.973629] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:04.599 [2024-11-19 14:28:02.973634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.599 [2024-11-19 14:28:02.973639] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:27:04.599 [2024-11-19 14:28:02.973644] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:04.599 [2024-11-19 14:28:02.973649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.599 [2024-11-19 14:28:02.973654] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:04.599 [2024-11-19 14:28:02.973658] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:04.599 [2024-11-19 14:28:02.973664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:04.599 [2024-11-19 14:28:02.973669] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:04.599 [2024-11-19 14:28:02.973673] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:04.599 [2024-11-19 14:28:02.973678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:04.599 [2024-11-19 14:28:02.973683] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:04.599 [2024-11-19 14:28:02.973688] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:04.599 [2024-11-19 14:28:02.973693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:04.599 [2024-11-19 14:28:02.973697] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:04.599 [2024-11-19 14:28:02.973702] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:04.599 [2024-11-19 14:28:02.973707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:04.599 [2024-11-19 14:28:02.973712] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:04.599 [2024-11-19 14:28:02.973716] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:04.599 [2024-11-19 14:28:02.973721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:04.599 [2024-11-19 14:28:02.973725] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:04.599 [2024-11-19 14:28:02.973730] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:04.599 [2024-11-19 14:28:02.973735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.599 [2024-11-19 14:28:02.973739] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:04.599 [2024-11-19 14:28:02.973744] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:04.599 [2024-11-19 14:28:02.973749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.599 [2024-11-19 14:28:02.973753] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:04.599 [2024-11-19 14:28:02.973758] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:04.599 [2024-11-19 14:28:02.973763] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:04.599 [2024-11-19 14:28:02.973771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.599 [2024-11-19 14:28:02.973777] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:04.599 [2024-11-19 14:28:02.973782] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:04.599 [2024-11-19 14:28:02.973787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:04.599 [2024-11-19 14:28:02.973792] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:04.599 [2024-11-19 14:28:02.973796] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:27:04.599 [2024-11-19 14:28:02.973801] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:04.599 [2024-11-19 14:28:02.973806] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:04.599 [2024-11-19 14:28:02.973813] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:04.599 [2024-11-19 14:28:02.973820] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:04.599 [2024-11-19 14:28:02.973826] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:04.599 [2024-11-19 14:28:02.973831] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:04.599 [2024-11-19 14:28:02.973840] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:04.599 [2024-11-19 14:28:02.973846] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:04.599 [2024-11-19 14:28:02.973851] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:04.599 [2024-11-19 14:28:02.973856] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:04.599 [2024-11-19 14:28:02.973861] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:04.599 [2024-11-19 14:28:02.973866] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:04.599 [2024-11-19 14:28:02.973871] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:04.599 [2024-11-19 14:28:02.974065] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:04.599 [2024-11-19 14:28:02.974098] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:04.599 [2024-11-19 14:28:02.974121] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:04.599 [2024-11-19 14:28:02.974142] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:04.599 [2024-11-19 14:28:02.974201] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:04.599 [2024-11-19 14:28:02.974226] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:04.599 [2024-11-19 14:28:02.974248] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:04.599 [2024-11-19 14:28:02.974270] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:04.599 
[2024-11-19 14:28:02.974309] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:04.599 [2024-11-19 14:28:02.974354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.599 [2024-11-19 14:28:02.974386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:04.599 [2024-11-19 14:28:02.974405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.834 ms 00:27:04.599 [2024-11-19 14:28:02.974423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.600 [2024-11-19 14:28:02.984981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.600 [2024-11-19 14:28:02.985068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:04.600 [2024-11-19 14:28:02.985113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.501 ms 00:27:04.600 [2024-11-19 14:28:02.985130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.600 [2024-11-19 14:28:02.985167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.600 [2024-11-19 14:28:02.985199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:04.600 [2024-11-19 14:28:02.985216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:04.600 [2024-11-19 14:28:02.985235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.600 [2024-11-19 14:28:03.009093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.600 [2024-11-19 14:28:03.009185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:04.600 [2024-11-19 14:28:03.009223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.797 ms 00:27:04.600 [2024-11-19 14:28:03.009241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.600 [2024-11-19 14:28:03.009279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.600 [2024-11-19 14:28:03.009297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:04.600 [2024-11-19 14:28:03.009312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:04.600 [2024-11-19 14:28:03.009326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.600 [2024-11-19 14:28:03.009397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.600 [2024-11-19 14:28:03.009417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:04.600 [2024-11-19 14:28:03.009424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:04.600 [2024-11-19 14:28:03.009430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.600 [2024-11-19 14:28:03.009456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.600 [2024-11-19 14:28:03.009464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:04.600 [2024-11-19 14:28:03.009470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:04.600 [2024-11-19 14:28:03.009476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.600 [2024-11-19 14:28:03.021412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.600 [2024-11-19 14:28:03.021438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:04.600 [2024-11-19 
14:28:03.021446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.920 ms 00:27:04.600 [2024-11-19 14:28:03.021452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.600 [2024-11-19 14:28:03.021524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.600 [2024-11-19 14:28:03.021532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:04.600 [2024-11-19 14:28:03.021539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:04.600 [2024-11-19 14:28:03.021544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.600 [2024-11-19 14:28:03.034319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.600 [2024-11-19 14:28:03.034346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:04.600 [2024-11-19 14:28:03.034354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.761 ms 00:27:04.600 [2024-11-19 14:28:03.034360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.600 [2024-11-19 14:28:03.041350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.600 [2024-11-19 14:28:03.041375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:04.600 [2024-11-19 14:28:03.041383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.205 ms 00:27:04.600 [2024-11-19 14:28:03.041389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.600 [2024-11-19 14:28:03.086561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.600 [2024-11-19 14:28:03.086590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:04.600 [2024-11-19 14:28:03.086599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 45.136 ms 00:27:04.600 [2024-11-19 14:28:03.086605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.600 [2024-11-19 14:28:03.086667] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:04.600 [2024-11-19 14:28:03.086700] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:04.600 [2024-11-19 14:28:03.086730] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:04.600 [2024-11-19 14:28:03.086759] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:04.600 [2024-11-19 14:28:03.086765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.600 [2024-11-19 14:28:03.086771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:04.600 [2024-11-19 14:28:03.086780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.130 ms 00:27:04.600 [2024-11-19 14:28:03.086787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.600 [2024-11-19 14:28:03.086823] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:04.600 [2024-11-19 14:28:03.086830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.600 [2024-11-19 14:28:03.086836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:04.600 [2024-11-19 14:28:03.086841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:04.600 [2024-11-19 
14:28:03.086846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.600 [2024-11-19 14:28:03.097989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.600 [2024-11-19 14:28:03.098015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:04.600 [2024-11-19 14:28:03.098023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.127 ms 00:27:04.600 [2024-11-19 14:28:03.098030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.600 [2024-11-19 14:28:03.104278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.600 [2024-11-19 14:28:03.104374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:04.600 [2024-11-19 14:28:03.104386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:04.600 [2024-11-19 14:28:03.104392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.600 [2024-11-19 14:28:03.104432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.600 [2024-11-19 14:28:03.104439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:27:04.600 [2024-11-19 14:28:03.104445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:04.600 [2024-11-19 14:28:03.104450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.600 [2024-11-19 14:28:03.104556] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:27:05.172 [2024-11-19 14:28:03.596447] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:27:05.172 [2024-11-19 14:28:03.596584] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:27:05.743 [2024-11-19 14:28:04.234503] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:27:05.743 [2024-11-19 14:28:04.234605] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:05.743 [2024-11-19 14:28:04.234619] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:05.743 [2024-11-19 14:28:04.234630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.743 [2024-11-19 14:28:04.234640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:05.743 [2024-11-19 14:28:04.234653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1130.160 ms 00:27:05.743 [2024-11-19 14:28:04.234661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.743 [2024-11-19 14:28:04.234704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.743 [2024-11-19 14:28:04.234713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:05.743 [2024-11-19 14:28:04.234722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:05.743 [2024-11-19 14:28:04.234730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.743 [2024-11-19 14:28:04.246559] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:05.743 [2024-11-19 14:28:04.246872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.743 [2024-11-19 14:28:04.246909] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:05.743 [2024-11-19 14:28:04.246920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.125 ms 00:27:05.743 [2024-11-19 14:28:04.246928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.743 [2024-11-19 14:28:04.247629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.743 [2024-11-19 14:28:04.247650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:27:05.743 [2024-11-19 14:28:04.247660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.632 ms 00:27:05.743 [2024-11-19 14:28:04.247668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.743 [2024-11-19 14:28:04.249899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.743 [2024-11-19 14:28:04.249920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:05.743 [2024-11-19 14:28:04.249929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.215 ms 00:27:05.743 [2024-11-19 14:28:04.249936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.743 [2024-11-19 14:28:04.276257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.743 [2024-11-19 14:28:04.276310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:27:05.743 [2024-11-19 14:28:04.276322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.297 ms 00:27:05.743 [2024-11-19 14:28:04.276330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.743 [2024-11-19 14:28:04.276446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.743 [2024-11-19 14:28:04.276458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:05.743 [2024-11-19 14:28:04.276469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:05.743 [2024-11-19 14:28:04.276476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.743 [2024-11-19 14:28:04.277974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.743 [2024-11-19 14:28:04.278019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:05.743 [2024-11-19 14:28:04.278030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.478 ms 00:27:05.743 [2024-11-19 14:28:04.278038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.743 [2024-11-19 14:28:04.278073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.743 [2024-11-19 14:28:04.278082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:05.743 [2024-11-19 14:28:04.278090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:05.743 [2024-11-19 14:28:04.278098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.743 [2024-11-19 14:28:04.278146] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:05.743 [2024-11-19 14:28:04.278157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.743 [2024-11-19 14:28:04.278165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:05.743 [2024-11-19 14:28:04.278176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:05.743 [2024-11-19 14:28:04.278184] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:05.743 [2024-11-19 14:28:04.278242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.743 [2024-11-19 14:28:04.278251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:05.743 [2024-11-19 14:28:04.278259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:27:05.743 [2024-11-19 14:28:04.278266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.743 [2024-11-19 14:28:04.279405] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1328.679 ms, result 0 00:27:05.743 [2024-11-19 14:28:04.292646] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.004 [2024-11-19 14:28:04.308638] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:06.004 [2024-11-19 14:28:04.316786] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:06.266 14:28:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:06.266 14:28:04 -- common/autotest_common.sh@862 -- # return 0 00:27:06.266 14:28:04 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:06.266 14:28:04 -- ftl/common.sh@95 -- # return 0 00:27:06.266 14:28:04 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:06.266 14:28:04 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:06.266 14:28:04 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:06.266 Validate MD5 checksum, iteration 1 00:27:06.266 14:28:04 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:06.266 14:28:04 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:06.266 14:28:04 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:06.266 14:28:04 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:06.266 14:28:04 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:06.266 14:28:04 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:06.266 14:28:04 -- ftl/common.sh@154 -- # return 0 00:27:06.266 14:28:04 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:06.267 [2024-11-19 14:28:04.788415] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
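The xtrace records above show upgrade_shutdown.sh entering test_validate_checksum. Reconstructed from those records as a sketch: the loop shape, tcp_dd arguments, and the md5sum/cut step are read directly off the trace, while `iterations`, `expected`, and `$testfile` (the test/ftl/file path) stand in for state the script set up before the shutdown:

    test_validate_checksum() {
        local skip=0 i sum
        for (( i = 0; i < iterations; i++ )); do
            echo "Validate MD5 checksum, iteration $(( i + 1 ))"
            # read one 1 GiB slice back out of the ftln1 bdev over NVMe/TCP
            tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            sum=$(md5sum "$testfile" | cut -f1 -d ' ')
            [[ $sum != "${expected[i]}" ]] && return 1  # must match the sum recorded pre-shutdown
            (( skip += 1024 ))                          # next 1024 MiB slice, as seen in the trace
        done
    }

The point of the test: each 1 GiB slice read after the target has been shut down and restarted must hash to the value recorded before the shutdown, proving the FTL device persisted its data.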
00:27:06.267 [2024-11-19 14:28:04.788535] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79434 ] 00:27:06.528 [2024-11-19 14:28:04.937637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.789 [2024-11-19 14:28:05.135923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.176  [2024-11-19T14:28:07.682Z] Copying: 530/1024 [MB] (530 MBps) [2024-11-19T14:28:07.942Z] Copying: 994/1024 [MB] (464 MBps) [2024-11-19T14:28:08.880Z] Copying: 1024/1024 [MB] (average 500 MBps) 00:27:10.318 00:27:10.318 14:28:08 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:10.318 14:28:08 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:12.302 14:28:10 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:12.560 Validate MD5 checksum, iteration 2 00:27:12.560 14:28:10 -- ftl/upgrade_shutdown.sh@103 -- # sum=5b888c98966fe3ce71ecc804bcfc6563 00:27:12.560 14:28:10 -- ftl/upgrade_shutdown.sh@105 -- # [[ 5b888c98966fe3ce71ecc804bcfc6563 != \5\b\8\8\8\c\9\8\9\6\6\f\e\3\c\e\7\1\e\c\c\8\0\4\b\c\f\c\6\5\6\3 ]] 00:27:12.560 14:28:10 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:12.560 14:28:10 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:12.560 14:28:10 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:12.560 14:28:10 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:12.560 14:28:10 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:12.560 14:28:10 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:12.560 14:28:10 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:12.560 14:28:10 -- ftl/common.sh@154 -- # return 0 00:27:12.560 14:28:10 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:12.560 [2024-11-19 14:28:10.925099] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:12.560 [2024-11-19 14:28:10.925430] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79502 ] 00:27:12.560 [2024-11-19 14:28:11.074083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.818 [2024-11-19 14:28:11.238543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.193  [2024-11-19T14:28:13.322Z] Copying: 636/1024 [MB] (636 MBps) [2024-11-19T14:28:14.261Z] Copying: 1024/1024 [MB] (average 632 MBps) 00:27:15.699 00:27:15.699 14:28:14 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:15.699 14:28:14 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:18.243 14:28:16 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:18.243 14:28:16 -- ftl/upgrade_shutdown.sh@103 -- # sum=86452f829aec544c70ec019c1adcf6ce 00:27:18.243 14:28:16 -- ftl/upgrade_shutdown.sh@105 -- # [[ 86452f829aec544c70ec019c1adcf6ce != \8\6\4\5\2\f\8\2\9\a\e\c\5\4\4\c\7\0\e\c\0\1\9\c\1\a\d\c\f\6\c\e ]] 00:27:18.243 14:28:16 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:18.243 14:28:16 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:18.243 14:28:16 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:18.243 14:28:16 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:18.243 14:28:16 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:18.243 14:28:16 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:18.243 14:28:16 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:18.243 14:28:16 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:18.243 14:28:16 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:18.243 14:28:16 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:18.243 14:28:16 -- ftl/common.sh@130 -- # [[ -n 79401 ]] 00:27:18.243 14:28:16 -- ftl/common.sh@131 -- # killprocess 79401 00:27:18.243 14:28:16 -- common/autotest_common.sh@936 -- # '[' -z 79401 ']' 00:27:18.243 14:28:16 -- common/autotest_common.sh@940 -- # kill -0 79401 00:27:18.243 14:28:16 -- common/autotest_common.sh@941 -- # uname 00:27:18.243 14:28:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:18.243 14:28:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79401 00:27:18.243 killing process with pid 79401 00:27:18.243 14:28:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:18.243 14:28:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:18.243 14:28:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79401' 00:27:18.243 14:28:16 -- common/autotest_common.sh@955 -- # kill 79401 00:27:18.243 14:28:16 -- common/autotest_common.sh@960 -- # wait 79401 00:27:18.505 [2024-11-19 14:28:16.821896] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:18.505 [2024-11-19 14:28:16.833180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.505 [2024-11-19 14:28:16.833217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:18.505 [2024-11-19 14:28:16.833227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:18.505 [2024-11-19 14:28:16.833234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.505 
[2024-11-19 14:28:16.833251] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:18.505 [2024-11-19 14:28:16.835371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.505 [2024-11-19 14:28:16.835399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:18.505 [2024-11-19 14:28:16.835407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.111 ms 00:27:18.505 [2024-11-19 14:28:16.835413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.505 [2024-11-19 14:28:16.835604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.505 [2024-11-19 14:28:16.835616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:18.505 [2024-11-19 14:28:16.835622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.174 ms 00:27:18.505 [2024-11-19 14:28:16.835628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.505 [2024-11-19 14:28:16.836981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.505 [2024-11-19 14:28:16.837004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:18.505 [2024-11-19 14:28:16.837012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.340 ms 00:27:18.505 [2024-11-19 14:28:16.837018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.505 [2024-11-19 14:28:16.837868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.505 [2024-11-19 14:28:16.837907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:18.505 [2024-11-19 14:28:16.837916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.827 ms 00:27:18.505 [2024-11-19 14:28:16.837922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.505 [2024-11-19 14:28:16.845710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.505 [2024-11-19 14:28:16.845739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:18.505 [2024-11-19 14:28:16.845747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.761 ms 00:27:18.505 [2024-11-19 14:28:16.845752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.505 [2024-11-19 14:28:16.850232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.505 [2024-11-19 14:28:16.850265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:18.505 [2024-11-19 14:28:16.850273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.453 ms 00:27:18.505 [2024-11-19 14:28:16.850279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.505 [2024-11-19 14:28:16.850337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.505 [2024-11-19 14:28:16.850345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:18.505 [2024-11-19 14:28:16.850351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:18.505 [2024-11-19 14:28:16.850357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.505 [2024-11-19 14:28:16.857669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.505 [2024-11-19 14:28:16.857695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:18.505 [2024-11-19 14:28:16.857702] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.300 ms 00:27:18.505 [2024-11-19 14:28:16.857708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.505 [2024-11-19 14:28:16.866034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.505 [2024-11-19 14:28:16.866061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:18.505 [2024-11-19 14:28:16.866067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.301 ms 00:27:18.505 [2024-11-19 14:28:16.866073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.505 [2024-11-19 14:28:16.873367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.505 [2024-11-19 14:28:16.873394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:18.505 [2024-11-19 14:28:16.873400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.268 ms 00:27:18.505 [2024-11-19 14:28:16.873405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.505 [2024-11-19 14:28:16.880658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.505 [2024-11-19 14:28:16.880780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:18.505 [2024-11-19 14:28:16.880793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.207 ms 00:27:18.505 [2024-11-19 14:28:16.880798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.505 [2024-11-19 14:28:16.880822] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:18.505 [2024-11-19 14:28:16.880834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:18.506 [2024-11-19 14:28:16.880846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:18.506 [2024-11-19 14:28:16.880851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:18.506 [2024-11-19 14:28:16.880858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:18.506 [2024-11-19 14:28:16.880864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:18.506 [2024-11-19 14:28:16.880869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:18.506 [2024-11-19 14:28:16.880887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:18.506 [2024-11-19 14:28:16.880893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:18.506 [2024-11-19 14:28:16.880899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:18.506 [2024-11-19 14:28:16.880905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:18.506 [2024-11-19 14:28:16.880911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:18.506 [2024-11-19 14:28:16.880916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:18.506 [2024-11-19 14:28:16.880922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:18.506 [2024-11-19 14:28:16.880928] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:18.506 [2024-11-19 14:28:16.880938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:18.506 [2024-11-19 14:28:16.880944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:18.506 [2024-11-19 14:28:16.880949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:18.506 [2024-11-19 14:28:16.880955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:18.506 [2024-11-19 14:28:16.880962] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:18.506 [2024-11-19 14:28:16.880968] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 37802868-17ca-49ed-ba1a-3910b4de473b 00:27:18.506 [2024-11-19 14:28:16.880974] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:18.506 [2024-11-19 14:28:16.880980] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:18.506 [2024-11-19 14:28:16.880986] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:18.506 [2024-11-19 14:28:16.880992] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:18.506 [2024-11-19 14:28:16.880997] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:18.506 [2024-11-19 14:28:16.881003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:18.506 [2024-11-19 14:28:16.881008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:18.506 [2024-11-19 14:28:16.881013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:18.506 [2024-11-19 14:28:16.881018] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:18.506 [2024-11-19 14:28:16.881024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.506 [2024-11-19 14:28:16.881030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:18.506 [2024-11-19 14:28:16.881038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.202 ms 00:27:18.506 [2024-11-19 14:28:16.881046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.506 [2024-11-19 14:28:16.890654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.506 [2024-11-19 14:28:16.890679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:18.506 [2024-11-19 14:28:16.890687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.595 ms 00:27:18.506 [2024-11-19 14:28:16.890693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.506 [2024-11-19 14:28:16.890840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.506 [2024-11-19 14:28:16.890846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:18.506 [2024-11-19 14:28:16.890856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.132 ms 00:27:18.506 [2024-11-19 14:28:16.890862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.506 [2024-11-19 14:28:16.926037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.506 [2024-11-19 14:28:16.926149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:18.506 [2024-11-19 14:28:16.926162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.000 ms 00:27:18.506 [2024-11-19 14:28:16.926168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.506 [2024-11-19 14:28:16.926193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.506 [2024-11-19 14:28:16.926198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:18.506 [2024-11-19 14:28:16.926209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.506 [2024-11-19 14:28:16.926215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.506 [2024-11-19 14:28:16.926270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.506 [2024-11-19 14:28:16.926279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:18.506 [2024-11-19 14:28:16.926285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.506 [2024-11-19 14:28:16.926291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.506 [2024-11-19 14:28:16.926304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.506 [2024-11-19 14:28:16.926310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:18.506 [2024-11-19 14:28:16.926315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.506 [2024-11-19 14:28:16.926324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.506 [2024-11-19 14:28:16.985007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.506 [2024-11-19 14:28:16.985122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:18.506 [2024-11-19 14:28:16.985136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.506 [2024-11-19 14:28:16.985143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.506 [2024-11-19 14:28:17.007861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.506 [2024-11-19 14:28:17.007903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:18.506 [2024-11-19 14:28:17.007916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.506 [2024-11-19 14:28:17.007922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.506 [2024-11-19 14:28:17.007967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.506 [2024-11-19 14:28:17.007974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:18.506 [2024-11-19 14:28:17.007980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.506 [2024-11-19 14:28:17.007986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.506 [2024-11-19 14:28:17.008032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.506 [2024-11-19 14:28:17.008039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:18.506 [2024-11-19 14:28:17.008045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.506 [2024-11-19 14:28:17.008051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.506 [2024-11-19 14:28:17.008120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.506 [2024-11-19 14:28:17.008128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:18.506 [2024-11-19 14:28:17.008134] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.506 [2024-11-19 14:28:17.008139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.506 [2024-11-19 14:28:17.008162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.506 [2024-11-19 14:28:17.008169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:18.506 [2024-11-19 14:28:17.008175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.506 [2024-11-19 14:28:17.008181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.506 [2024-11-19 14:28:17.008211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.506 [2024-11-19 14:28:17.008218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:18.506 [2024-11-19 14:28:17.008225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.506 [2024-11-19 14:28:17.008231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.506 [2024-11-19 14:28:17.008264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.506 [2024-11-19 14:28:17.008270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:18.506 [2024-11-19 14:28:17.008277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.506 [2024-11-19 14:28:17.008283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.506 [2024-11-19 14:28:17.008377] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 175.173 ms, result 0 00:27:19.449 14:28:17 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:19.450 14:28:17 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:19.450 14:28:17 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:19.450 14:28:17 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:19.450 14:28:17 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:19.450 14:28:17 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:19.450 Remove shared memory files 00:27:19.450 14:28:17 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:19.450 14:28:17 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:19.450 14:28:17 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:19.450 14:28:17 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:19.450 14:28:17 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid79197 00:27:19.450 14:28:17 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:19.450 14:28:17 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:19.450 ************************************ 00:27:19.450 END TEST ftl_upgrade_shutdown 00:27:19.450 ************************************ 00:27:19.450 00:27:19.450 real 1m22.003s 00:27:19.450 user 1m55.171s 00:27:19.450 sys 0m20.781s 00:27:19.450 14:28:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:19.450 14:28:17 -- common/autotest_common.sh@10 -- # set +x 00:27:19.450 14:28:17 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:27:19.450 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:27:19.450 14:28:17 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:27:19.450 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:27:19.450 14:28:17 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:27:19.450 14:28:17 -- ftl/ftl.sh@14 -- # killprocess 70583 00:27:19.450 14:28:17 -- 
common/autotest_common.sh@936 -- # '[' -z 70583 ']' 00:27:19.450 Process with pid 70583 is not found 00:27:19.450 14:28:17 -- common/autotest_common.sh@940 -- # kill -0 70583 00:27:19.450 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70583) - No such process 00:27:19.450 14:28:17 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70583 is not found' 00:27:19.450 14:28:17 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:27:19.450 14:28:17 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79610 00:27:19.450 14:28:17 -- ftl/ftl.sh@20 -- # waitforlisten 79610 00:27:19.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.450 14:28:17 -- common/autotest_common.sh@829 -- # '[' -z 79610 ']' 00:27:19.450 14:28:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.450 14:28:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:19.450 14:28:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.450 14:28:17 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:19.450 14:28:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:19.450 14:28:17 -- common/autotest_common.sh@10 -- # set +x 00:27:19.450 [2024-11-19 14:28:17.785359] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:19.450 [2024-11-19 14:28:17.785468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79610 ] 00:27:19.450 [2024-11-19 14:28:17.923335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.711 [2024-11-19 14:28:18.065470] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:19.711 [2024-11-19 14:28:18.065619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.971 14:28:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:19.971 14:28:18 -- common/autotest_common.sh@862 -- # return 0 00:27:19.971 14:28:18 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:27:20.231 nvme0n1 00:27:20.231 14:28:18 -- ftl/ftl.sh@22 -- # clear_lvols 00:27:20.231 14:28:18 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:20.231 14:28:18 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:20.492 14:28:18 -- ftl/common.sh@28 -- # stores=37627e24-7f62-4eb5-9d70-b296158c7d7c 00:27:20.492 14:28:18 -- ftl/common.sh@29 -- # for lvs in $stores 00:27:20.492 14:28:18 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 37627e24-7f62-4eb5-9d70-b296158c7d7c 00:27:20.753 14:28:19 -- ftl/ftl.sh@23 -- # killprocess 79610 00:27:20.753 14:28:19 -- common/autotest_common.sh@936 -- # '[' -z 79610 ']' 00:27:20.753 14:28:19 -- common/autotest_common.sh@940 -- # kill -0 79610 00:27:20.753 14:28:19 -- common/autotest_common.sh@941 -- # uname 00:27:20.753 14:28:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:20.753 14:28:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79610 00:27:20.753 killing process with pid 79610 00:27:20.753 14:28:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:20.753 14:28:19 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:20.753 14:28:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79610' 00:27:20.753 14:28:19 -- common/autotest_common.sh@955 -- # kill 79610 00:27:20.753 14:28:19 -- common/autotest_common.sh@960 -- # wait 79610 00:27:22.139 14:28:20 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:22.139 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:22.139 Waiting for block devices as requested 00:27:22.139 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:27:22.139 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:27:22.401 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:22.401 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:27.691 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:27:27.691 Remove shared memory files 00:27:27.691 14:28:25 -- ftl/ftl.sh@28 -- # remove_shm 00:27:27.691 14:28:25 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:27.691 14:28:25 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:27.691 14:28:25 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:27.691 14:28:25 -- ftl/common.sh@207 -- # rm -f rm -f 00:27:27.691 14:28:25 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:27.691 14:28:25 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:27.691 ************************************ 00:27:27.691 END TEST ftl 00:27:27.691 ************************************ 00:27:27.691 00:27:27.691 real 13m3.557s 00:27:27.691 user 15m2.249s 00:27:27.691 sys 1m36.977s 00:27:27.691 14:28:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:27.691 14:28:25 -- common/autotest_common.sh@10 -- # set +x 00:27:27.691 14:28:25 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:27.691 14:28:25 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:27.691 14:28:25 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:27.691 14:28:25 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:27.691 14:28:25 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:27:27.691 14:28:25 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:27:27.691 14:28:25 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:27.691 14:28:25 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:27.691 14:28:25 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:27:27.691 14:28:25 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:27:27.691 14:28:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:27.691 14:28:25 -- common/autotest_common.sh@10 -- # set +x 00:27:27.691 14:28:25 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:27:27.691 14:28:25 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:27:27.691 14:28:25 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:27:27.691 14:28:25 -- common/autotest_common.sh@10 -- # set +x 00:27:29.076 INFO: APP EXITING 00:27:29.076 INFO: killing all VMs 00:27:29.076 INFO: killing vhost app 00:27:29.076 INFO: EXIT DONE 00:27:29.649 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:29.649 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:27:29.649 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:27:29.649 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:29.649 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:30.594 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
00:27:30.594 Cleaning 00:27:30.594 Removing: /var/run/dpdk/spdk0/config 00:27:30.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:30.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:30.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:30.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:30.594 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:30.594 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:30.594 Removing: /var/run/dpdk/spdk0 00:27:30.594 Removing: /var/run/dpdk/spdk_pid55985 00:27:30.594 Removing: /var/run/dpdk/spdk_pid56186 00:27:30.594 Removing: /var/run/dpdk/spdk_pid56485 00:27:30.594 Removing: /var/run/dpdk/spdk_pid56584 00:27:30.594 Removing: /var/run/dpdk/spdk_pid56681 00:27:30.594 Removing: /var/run/dpdk/spdk_pid56780 00:27:30.594 Removing: /var/run/dpdk/spdk_pid56878 00:27:30.594 Removing: /var/run/dpdk/spdk_pid56923 00:27:30.594 Removing: /var/run/dpdk/spdk_pid56954 00:27:30.594 Removing: /var/run/dpdk/spdk_pid57029 00:27:30.594 Removing: /var/run/dpdk/spdk_pid57113 00:27:30.594 Removing: /var/run/dpdk/spdk_pid57537 00:27:30.594 Removing: /var/run/dpdk/spdk_pid57603 00:27:30.594 Removing: /var/run/dpdk/spdk_pid57674 00:27:30.594 Removing: /var/run/dpdk/spdk_pid57697 00:27:30.594 Removing: /var/run/dpdk/spdk_pid57796 00:27:30.594 Removing: /var/run/dpdk/spdk_pid57806 00:27:30.594 Removing: /var/run/dpdk/spdk_pid57905 00:27:30.594 Removing: /var/run/dpdk/spdk_pid57921 00:27:30.594 Removing: /var/run/dpdk/spdk_pid57974 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58005 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58058 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58076 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58226 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58268 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58356 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58417 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58448 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58526 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58548 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58588 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58608 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58650 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58681 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58722 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58742 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58783 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58809 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58856 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58882 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58925 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58945 00:27:30.594 Removing: /var/run/dpdk/spdk_pid58986 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59012 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59054 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59077 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59118 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59144 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59185 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59211 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59246 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59267 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59308 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59334 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59375 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59401 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59442 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59468 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59509 00:27:30.594 Removing: 
/var/run/dpdk/spdk_pid59535 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59578 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59607 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59651 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59680 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59723 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59745 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59786 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59817 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59859 00:27:30.594 Removing: /var/run/dpdk/spdk_pid59947 00:27:30.594 Removing: /var/run/dpdk/spdk_pid60060 00:27:30.594 Removing: /var/run/dpdk/spdk_pid60238 00:27:30.594 Removing: /var/run/dpdk/spdk_pid60324 00:27:30.594 Removing: /var/run/dpdk/spdk_pid60366 00:27:30.594 Removing: /var/run/dpdk/spdk_pid60790 00:27:30.594 Removing: /var/run/dpdk/spdk_pid60998 00:27:30.594 Removing: /var/run/dpdk/spdk_pid61114 00:27:30.594 Removing: /var/run/dpdk/spdk_pid61167 00:27:30.594 Removing: /var/run/dpdk/spdk_pid61192 00:27:30.594 Removing: /var/run/dpdk/spdk_pid61275 00:27:30.594 Removing: /var/run/dpdk/spdk_pid61935 00:27:30.594 Removing: /var/run/dpdk/spdk_pid61977 00:27:30.594 Removing: /var/run/dpdk/spdk_pid62493 00:27:30.594 Removing: /var/run/dpdk/spdk_pid62612 00:27:30.594 Removing: /var/run/dpdk/spdk_pid62721 00:27:30.594 Removing: /var/run/dpdk/spdk_pid62774 00:27:30.594 Removing: /var/run/dpdk/spdk_pid62805 00:27:30.594 Removing: /var/run/dpdk/spdk_pid62836 00:27:30.594 Removing: /var/run/dpdk/spdk_pid64748 00:27:30.594 Removing: /var/run/dpdk/spdk_pid64887 00:27:30.594 Removing: /var/run/dpdk/spdk_pid64891 00:27:30.594 Removing: /var/run/dpdk/spdk_pid64903 00:27:30.594 Removing: /var/run/dpdk/spdk_pid64981 00:27:30.594 Removing: /var/run/dpdk/spdk_pid64985 00:27:30.594 Removing: /var/run/dpdk/spdk_pid64997 00:27:30.594 Removing: /var/run/dpdk/spdk_pid65058 00:27:30.594 Removing: /var/run/dpdk/spdk_pid65062 00:27:30.594 Removing: /var/run/dpdk/spdk_pid65074 00:27:30.859 Removing: /var/run/dpdk/spdk_pid65145 00:27:30.859 Removing: /var/run/dpdk/spdk_pid65150 00:27:30.859 Removing: /var/run/dpdk/spdk_pid65162 00:27:30.859 Removing: /var/run/dpdk/spdk_pid66604 00:27:30.859 Removing: /var/run/dpdk/spdk_pid66715 00:27:30.859 Removing: /var/run/dpdk/spdk_pid66852 00:27:30.859 Removing: /var/run/dpdk/spdk_pid66936 00:27:30.859 Removing: /var/run/dpdk/spdk_pid67018 00:27:30.859 Removing: /var/run/dpdk/spdk_pid67094 00:27:30.859 Removing: /var/run/dpdk/spdk_pid67204 00:27:30.859 Removing: /var/run/dpdk/spdk_pid67273 00:27:30.859 Removing: /var/run/dpdk/spdk_pid67420 00:27:30.859 Removing: /var/run/dpdk/spdk_pid67807 00:27:30.859 Removing: /var/run/dpdk/spdk_pid67844 00:27:30.859 Removing: /var/run/dpdk/spdk_pid68286 00:27:30.859 Removing: /var/run/dpdk/spdk_pid68471 00:27:30.859 Removing: /var/run/dpdk/spdk_pid68571 00:27:30.859 Removing: /var/run/dpdk/spdk_pid68676 00:27:30.859 Removing: /var/run/dpdk/spdk_pid68732 00:27:30.859 Removing: /var/run/dpdk/spdk_pid68762 00:27:30.859 Removing: /var/run/dpdk/spdk_pid69071 00:27:30.859 Removing: /var/run/dpdk/spdk_pid69133 00:27:30.859 Removing: /var/run/dpdk/spdk_pid69212 00:27:30.859 Removing: /var/run/dpdk/spdk_pid69618 00:27:30.859 Removing: /var/run/dpdk/spdk_pid69773 00:27:30.859 Removing: /var/run/dpdk/spdk_pid70583 00:27:30.859 Removing: /var/run/dpdk/spdk_pid70724 00:27:30.859 Removing: /var/run/dpdk/spdk_pid70935 00:27:30.859 Removing: /var/run/dpdk/spdk_pid71033 00:27:30.859 Removing: /var/run/dpdk/spdk_pid71330 00:27:30.859 Removing: /var/run/dpdk/spdk_pid71606 
00:27:30.859 Removing: /var/run/dpdk/spdk_pid71967 00:27:30.859 Removing: /var/run/dpdk/spdk_pid72228 00:27:30.859 Removing: /var/run/dpdk/spdk_pid72438 00:27:30.859 Removing: /var/run/dpdk/spdk_pid72484 00:27:30.859 Removing: /var/run/dpdk/spdk_pid72693 00:27:30.859 Removing: /var/run/dpdk/spdk_pid72718 00:27:30.859 Removing: /var/run/dpdk/spdk_pid72771 00:27:30.859 Removing: /var/run/dpdk/spdk_pid73040 00:27:30.859 Removing: /var/run/dpdk/spdk_pid73279 00:27:30.859 Removing: /var/run/dpdk/spdk_pid73889 00:27:30.859 Removing: /var/run/dpdk/spdk_pid74606 00:27:30.859 Removing: /var/run/dpdk/spdk_pid75218 00:27:30.859 Removing: /var/run/dpdk/spdk_pid76024 00:27:30.859 Removing: /var/run/dpdk/spdk_pid76179 00:27:30.859 Removing: /var/run/dpdk/spdk_pid76267 00:27:30.859 Removing: /var/run/dpdk/spdk_pid76757 00:27:30.859 Removing: /var/run/dpdk/spdk_pid76816 00:27:30.859 Removing: /var/run/dpdk/spdk_pid77437 00:27:30.859 Removing: /var/run/dpdk/spdk_pid77890 00:27:30.859 Removing: /var/run/dpdk/spdk_pid78614 00:27:30.859 Removing: /var/run/dpdk/spdk_pid78746 00:27:30.859 Removing: /var/run/dpdk/spdk_pid78805 00:27:30.859 Removing: /var/run/dpdk/spdk_pid78864 00:27:30.859 Removing: /var/run/dpdk/spdk_pid78923 00:27:30.859 Removing: /var/run/dpdk/spdk_pid78994 00:27:30.859 Removing: /var/run/dpdk/spdk_pid79197 00:27:30.859 Removing: /var/run/dpdk/spdk_pid79232 00:27:30.859 Removing: /var/run/dpdk/spdk_pid79322 00:27:30.859 Removing: /var/run/dpdk/spdk_pid79401 00:27:30.859 Removing: /var/run/dpdk/spdk_pid79434 00:27:30.859 Removing: /var/run/dpdk/spdk_pid79502 00:27:30.859 Removing: /var/run/dpdk/spdk_pid79610 00:27:30.859 Clean 00:27:31.119 killing process with pid 48179 00:27:31.119 killing process with pid 48180 00:27:31.119 14:28:29 -- common/autotest_common.sh@1446 -- # return 0 00:27:31.119 14:28:29 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:27:31.119 14:28:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:31.119 14:28:29 -- common/autotest_common.sh@10 -- # set +x 00:27:31.119 14:28:29 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:27:31.119 14:28:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:31.119 14:28:29 -- common/autotest_common.sh@10 -- # set +x 00:27:31.119 14:28:29 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:31.119 14:28:29 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:31.119 14:28:29 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:31.119 14:28:29 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:27:31.119 14:28:29 -- spdk/autotest.sh@383 -- # hostname 00:27:31.119 14:28:29 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:31.380 geninfo: WARNING: invalid characters removed from testname! 
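The coverage stage wraps up the job: the capture above (autotest.sh@383) is merged with the pre-test baseline and then filtered down to SPDK's own sources by the lcov calls traced below. Stripped of the repeated --rc options and with $src and $testname standing in for the full path and hostname arguments, the pipeline is:

    lcov -q -c --no-external -d "$src" -t "$testname" -o cov_test.info  # capture this run    (@383)
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info         # merge with baseline (@384)
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info              # drop bundled DPDK   (@385)
    lcov -q -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info  # system files (@389)
    lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info      # (@390)
    lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info    # (@391)
    lcov -q -r cov_total.info '*/app/spdk_top/*' -o cov_total.info      # (@392)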
00:27:57.966 14:28:52 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:57.966 14:28:55 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:59.880 14:28:57 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:01.795 14:28:59 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:03.711 14:29:02 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:05.147 14:29:03 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:07.700 14:29:05 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:07.700 14:29:05 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:28:07.700 14:29:05 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:28:07.700 14:29:05 -- common/autotest_common.sh@1690 -- $ lcov --version 00:28:07.700 14:29:05 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:28:07.700 14:29:05 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:28:07.700 14:29:05 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:28:07.700 14:29:05 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:28:07.700 14:29:05 -- scripts/common.sh@335 -- $ IFS=.-: 00:28:07.700 14:29:05 -- scripts/common.sh@335 -- $ read -ra ver1 00:28:07.700 14:29:05 -- scripts/common.sh@336 -- $ IFS=.-: 00:28:07.700 14:29:05 -- scripts/common.sh@336 -- $ read -ra ver2 00:28:07.700 14:29:05 -- scripts/common.sh@337 -- $ local 'op=<' 00:28:07.700 14:29:05 -- scripts/common.sh@339 -- $ ver1_l=2 00:28:07.700 14:29:05 -- scripts/common.sh@340 -- $ ver2_l=1 00:28:07.700 14:29:05 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 
v 00:28:07.700 14:29:05 -- scripts/common.sh@343 -- $ case "$op" in 00:28:07.700 14:29:05 -- scripts/common.sh@344 -- $ : 1 00:28:07.700 14:29:05 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:28:07.700 14:29:05 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:07.700 14:29:05 -- scripts/common.sh@364 -- $ decimal 1 00:28:07.700 14:29:05 -- scripts/common.sh@352 -- $ local d=1 00:28:07.700 14:29:05 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:28:07.700 14:29:05 -- scripts/common.sh@354 -- $ echo 1 00:28:07.700 14:29:05 -- scripts/common.sh@364 -- $ ver1[v]=1 00:28:07.700 14:29:05 -- scripts/common.sh@365 -- $ decimal 2 00:28:07.700 14:29:05 -- scripts/common.sh@352 -- $ local d=2 00:28:07.700 14:29:05 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:28:07.700 14:29:05 -- scripts/common.sh@354 -- $ echo 2 00:28:07.701 14:29:05 -- scripts/common.sh@365 -- $ ver2[v]=2 00:28:07.701 14:29:05 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:28:07.701 14:29:05 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:28:07.701 14:29:05 -- scripts/common.sh@367 -- $ return 0 00:28:07.701 14:29:05 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:07.701 14:29:05 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:28:07.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.701 --rc genhtml_branch_coverage=1 00:28:07.701 --rc genhtml_function_coverage=1 00:28:07.701 --rc genhtml_legend=1 00:28:07.701 --rc geninfo_all_blocks=1 00:28:07.701 --rc geninfo_unexecuted_blocks=1 00:28:07.701 00:28:07.701 ' 00:28:07.701 14:29:05 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:28:07.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.701 --rc genhtml_branch_coverage=1 00:28:07.701 --rc genhtml_function_coverage=1 00:28:07.701 --rc genhtml_legend=1 00:28:07.701 --rc geninfo_all_blocks=1 00:28:07.701 --rc geninfo_unexecuted_blocks=1 00:28:07.701 00:28:07.701 ' 00:28:07.701 14:29:05 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:28:07.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.701 --rc genhtml_branch_coverage=1 00:28:07.701 --rc genhtml_function_coverage=1 00:28:07.701 --rc genhtml_legend=1 00:28:07.701 --rc geninfo_all_blocks=1 00:28:07.701 --rc geninfo_unexecuted_blocks=1 00:28:07.701 00:28:07.701 ' 00:28:07.701 14:29:05 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:28:07.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.701 --rc genhtml_branch_coverage=1 00:28:07.701 --rc genhtml_function_coverage=1 00:28:07.701 --rc genhtml_legend=1 00:28:07.701 --rc geninfo_all_blocks=1 00:28:07.701 --rc geninfo_unexecuted_blocks=1 00:28:07.701 00:28:07.701 ' 00:28:07.701 14:29:05 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:07.701 14:29:05 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:07.701 14:29:05 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.701 14:29:05 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.701 14:29:05 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.701 14:29:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.701 14:29:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.701 14:29:05 -- paths/export.sh@5 -- $ export PATH 00:28:07.701 14:29:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.701 14:29:05 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:07.701 14:29:05 -- common/autobuild_common.sh@440 -- $ date +%s 00:28:07.701 14:29:05 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732026545.XXXXXX 00:28:07.701 14:29:05 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732026545.mLIxqD 00:28:07.701 14:29:05 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:28:07.701 14:29:05 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:28:07.701 14:29:05 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:28:07.701 14:29:05 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:07.701 14:29:05 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:28:07.701 14:29:05 -- common/autobuild_common.sh@456 -- $ get_config_params 00:28:07.701 14:29:05 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:28:07.701 14:29:05 -- common/autotest_common.sh@10 -- $ set +x 00:28:07.701 14:29:05 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:28:07.701 14:29:05 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:28:07.701 14:29:05 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:28:07.701 14:29:05 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:07.701 14:29:05 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:28:07.701 14:29:05 -- 
spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:07.701 14:29:05 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:07.701 14:29:05 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:07.701 14:29:05 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:07.701 14:29:05 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:07.701 14:29:05 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:07.701 + [[ -n 4996 ]] 00:28:07.701 + sudo kill 4996 00:28:07.712 [Pipeline] } 00:28:07.727 [Pipeline] // timeout 00:28:07.732 [Pipeline] } 00:28:07.746 [Pipeline] // stage 00:28:07.751 [Pipeline] } 00:28:07.766 [Pipeline] // catchError 00:28:07.775 [Pipeline] stage 00:28:07.778 [Pipeline] { (Stop VM) 00:28:07.791 [Pipeline] sh 00:28:08.075 + vagrant halt 00:28:10.621 ==> default: Halting domain... 00:28:17.219 [Pipeline] sh 00:28:17.504 + vagrant destroy -f 00:28:20.049 ==> default: Removing domain... 00:28:20.638 [Pipeline] sh 00:28:20.926 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:28:20.936 [Pipeline] } 00:28:20.949 [Pipeline] // stage 00:28:20.953 [Pipeline] } 00:28:20.964 [Pipeline] // dir 00:28:20.968 [Pipeline] } 00:28:20.982 [Pipeline] // wrap 00:28:20.988 [Pipeline] } 00:28:21.000 [Pipeline] // catchError 00:28:21.007 [Pipeline] stage 00:28:21.009 [Pipeline] { (Epilogue) 00:28:21.021 [Pipeline] sh 00:28:21.308 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:25.517 [Pipeline] catchError 00:28:25.520 [Pipeline] { 00:28:25.555 [Pipeline] sh 00:28:25.844 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:25.844 Artifacts sizes are good 00:28:25.855 [Pipeline] } 00:28:25.871 [Pipeline] // catchError 00:28:25.884 [Pipeline] archiveArtifacts 00:28:25.892 Archiving artifacts 00:28:26.060 [Pipeline] cleanWs 00:28:26.075 [WS-CLEANUP] Deleting project workspace... 00:28:26.075 [WS-CLEANUP] Deferred wipeout is used... 00:28:26.083 [WS-CLEANUP] done 00:28:26.085 [Pipeline] } 00:28:26.102 [Pipeline] // stage 00:28:26.110 [Pipeline] } 00:28:26.124 [Pipeline] // node 00:28:26.129 [Pipeline] End of Pipeline 00:28:26.163 Finished: SUCCESS
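For reference, the scripts/common.sh trace near the end of the run (lt 1.15 2 via cmp_versions) boils down to a dotted-decimal less-than test that decides which lcov flags to use for lcov >= 2. A standalone sketch of the same comparison, assuming plain bash; ver_lt is a hypothetical name, not the SPDK helper:

ver_lt() {                                  # ver_lt 1.15 2 -> true when $1 < $2
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"            # split on the same IFS the trace uses
    IFS=.-: read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1  # missing fields compare as 0
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1                                # equal versions are not less-than
}
ver_lt 1.15 2 && echo "lcov older than 2"   # matches the trace's 'return 0' path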